More Than Meets the Eye

Every day we hear warnings not to open attachments, click on links, or enter our credentials into websites that do not look trustworthy.  But what if they do look legit?  How do we tell?

Our latest report shows not only the lengths to which an espionage operation will go to fool users, but also how difficult it can be for the average user to tell a real website from a fake one.

Authored by the Citizen Lab’s Jakub Dalek, Geoffrey Alexander, Masashi Crete-Nishihata, and Matt Brooks, our report, entitled “Insider Information: An intrusion campaign targeting Chinese language news sites,” details a campaign of reconnaissance, phishing, and targeted malware at the heart of which are carefully-crafted mimics of several prominent Chinese-language news websites.

Our investigation began when staff members of China Digital Times — a popular China-focused news portal founded by UC-Berkeley professor and prominent human rights activist Xiao Qiang — began receiving unsolicited emails with promises of controversial material.  The emails contained a link to what appeared to be the legitimate China Digital Times website. However, it was not.  The operators behind this campaign had copied the entire website and then hosted it on a slightly altered domain.  Instead of “” the operators used the domain “”

Can you spot the difference?  

If you noticed the substitution of “a” for “i” in the word digital, you are correct!
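A one-character swap like this is easy to miss by eye but trivial to catch programmatically. As a minimal, illustrative sketch (the domain names below are assumptions based on the a-for-i substitution described above, not taken from the report), a lookalike check can flag any domain within a small edit distance of a trusted one:

```python
def edit_distance(a: str, b: str) -> int:
    # Levenshtein distance via the classic dynamic-programming recurrence.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def looks_like_typosquat(candidate: str, trusted: str) -> bool:
    # Flag domains one or two edits away from a trusted domain
    # (an identical domain is, of course, not a typosquat).
    return 0 < edit_distance(candidate, trusted) <= 2

# Hypothetical example: an "a" swapped in for an "i".
print(looks_like_typosquat("chinadagitaltimes.net", "chinadigitaltimes.net"))  # True
```

Real lookalike detection would also have to handle homoglyphs (e.g. Cyrillic characters that render like Latin ones) and alternate top-level domains, but even this simple distance check would catch the kind of substitution described here.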

Other than the misspelled domain, the legitimate and fake news websites are identical, with one additional key difference: the operators also added a few lines of JavaScript to the fake site that trigger a popup window asking the visitor to enter their email and password into a fake WordPress login page.  Had the targets done so, they would then have been redirected back to the legitimate China Digital Times website, oblivious to the fact that their credentials to administer the website had been stolen by the operators, allowing them to effectively manage and edit the legitimate website itself.

By analyzing the server used to host the fake website, Citizen Lab researchers were also able to identify several other fake websites, built with content copied from Chinese-language news websites the operators had mimicked, presumably for phishing.  We also found that some of the servers controlled by the operators were used to stage malware.

It is noteworthy that all of the fake websites our researchers discovered in this campaign are meant to mimic news websites that publish content critical of the Chinese government.  It is possible the operators behind this campaign are “hackers for hire” — typical of the way in which a lot of cyber espionage is outsourced in China.  However, we are unable to positively attribute this campaign to a specific state agency.

I expect we will see more cases such as these, in which legitimate news sites are doctored and manipulated to push disinformation or facilitate cyber espionage.  With each of us bombarded with data from social media on a daily basis, discerning “fake” from “real” or “malicious” from “benign” will become ever more challenging and time-consuming. Cases such as these illustrate the importance of educating users, especially those working in high-risk areas such as investigative journalism, about integrating information security and digital hygiene into their daily routines.

One final note in this regard: hats off to the China Digital Times staff, not only for spotting the malicious emails but also for sharing them with the Citizen Lab for further analysis, which led to the discovery of the wider campaign.  Cooperation of this sort is essential for research to progress, and for journalists and the entire human rights community to be aware of the types of threats they mutually face.

Mexico Wages Cyber Warfare Against Journalists, and Their Minor Children

For years, Citizen Lab has been sounding alarms about the abuse of commercial spyware. We have produced extensive evidence showing how surveillance technology, allegedly restricted to government agencies for criminal, terrorism, and national security investigations, ends up being deployed against civil society.

Today’s report not only adds to the mountain of such evidence, it details perhaps the most flagrant and disturbing example of the abuse of commercial spyware we have yet encountered.

Working with Mexican civil society partners R3D, SocialTIC, and Article 19, our team — led by John Scott-Railton — identified more than 75 SMS messages sent to the phones of 12 individuals, most of whom are journalists, lawyers, and human rights defenders. Ten are Mexican, one was a minor child at the time of targeting, and one is a US citizen.

These SMS messages contained links to the exploit infrastructure of a secretive Israeli cyber warfare company, NSO Group.  Had they been clicked, the links would have activated exploits of what were, at the time, undisclosed software vulnerabilities in the targets’ Android or iPhone devices.  Known in NSO Group’s marketing as “Pegasus,” this exploit infrastructure allows operators to surreptitiously monitor every aspect of a target’s device: turn on the camera, capture ambient sounds, intercept or spoof emails and text messages, circumvent end-to-end encryption, and track movements.

We first encountered NSO Group in August 2016 when UAE human rights defender Ahmed Mansoor shared with Citizen Lab researchers suspicious SMS messages he received containing links to NSO infrastructure. When we published our report on Mansoor, we had some evidence of targeting in Mexico that subsequently led to a follow-up report earlier this year on the use of NSO’s surveillance technology to target Mexican health advocates and food scientists.

The targeting we outline in our latest report, which runs from January 2015 to August 2016, involves a much wider campaign. It includes 12 individuals who share a common trait: involvement in investigations into Mexican government corruption, forced disappearances, or other human rights abuses. All of the individuals who cooperated in our research consented to be named in the report. The August 2016 endpoint coincides with our disclosure to Apple about NSO’s exploits, which led to the shutdown of NSO’s infrastructure (or at least that particular phase of it).

Among the noteworthy aspects of this latest case are the persistent and brazen attempts by the operators to trick recipients into clicking on links.  Each of the targets received a barrage of SMS messages that included crude sexual taunts, alleged pictures of inappropriate, threatening, or suspicious behavior, and other ruses.  Many received fake AMBER Alert notices about child abductions as well as fake communications from the US Embassy in Mexico.

What is most disturbing is that the minor child of one of the targets — Emilio Aristegui, son of journalist Carmen Aristegui — received at least 22 SMS messages from the operators while he was attending school in the United States.  Presumably these attempts to infect Emilio’s phone were intended as a backdoor to his mother’s phone. But it is also possible the operators had a more sinister motivation.  The attempts to infect both Carmen and Emilio took place at the same time Carmen Aristegui was investigating a major corruption scandal involving the President of Mexico.

Our report makes it clear that the NSO Group, like competitor companies Hacking Team and FinFisher, is unable or unwilling to control the abuse of its products.  Time and again, companies like these, when presented with evidence of abuse, effectively pass the buck, claiming that they sell only to “government agencies” for use in criminal, counterintelligence, or anti-terrorism investigations.  The problem is that many of those government clients are corrupt and lack proper oversight; what constitutes a “crime” for officials and powerful elites can include any activity that challenges their position of power — especially investigative journalism.

Mexico is a case in point.  Ranked by the Economist Intelligence Unit as a “flawed democracy”, Mexico’s government agencies are riven with corruption.  Mexico is one of the most dangerous places in the world to be a journalist, not only because of violence related to the drug cartels but also because of threats from government officials.  As Reporters Without Borders notes, “[w]hen journalists cover subjects linked to organized crime or political corruption (especially at the local level), they immediately become targets and are often executed in cold blood.”

In spite of these glaring insecurity and accountability issues, the NSO Group went ahead and sold its products to multiple Mexican government agencies, according to leaked documents reported on in the New York Times.  Other leaked documents show that Mexico was at one time the largest single-country client of another commercial spyware company, Hacking Team.  Should it come as any surprise that these powerful surveillance technologies would end up being deployed against those who aim to expose corrupt Mexican officials?

What is to be done about these abuses? In a recent publication, Citizen Lab senior researcher Sarah McKune and I outlined a “checklist of measures” that could be taken to hold the commercial spyware market accountable, including the application of relevant criminal law. It is noteworthy in this regard that, while in the United States, the minor child Emilio Aristegui received SMS messages purporting to be from the US Embassy.  Impersonating the US Government is a violation of the US Criminal Code, and the targeting may very well constitute a violation of the US Wiretap Act.  At the very least, it is a violation of diplomatic norms.  How will the United States Government respond?

NSO Group is an Israeli company, and thus subject to Israeli law.  In the past, Israel has prided itself on strict export controls around commercial surveillance technology.  Yet this latest example shows yet again the ineffectiveness of those controls.  Will Israeli lawmakers tighten regulations around NSO Group in response?

Among the checklist of measures McKune and I identified is the importance of evidence-based research on the commercial spyware market to help track abuses and raise awareness.  It is important to underline that the work undertaken in this report could not have been done without the close collaboration between Citizen Lab researchers and the Mexican civil society groups R3D, SocialTIC, and Article 19.  Collaborations like these are essential to exposing the negative externalities of the commercial spyware market, documenting its harms, and shedding light on abuse.

I suspect it will not be the last collaboration of this sort.

Read the full report, “Reckless Exploit: Journalists, Lawyers, Children Targeted in Mexico with NSO Spyware,” authored by John Scott-Railton, Bill Marczak, Bahr Abdulrazzak, Masashi Crete-Nishihata, and me, here:


From Russia, with Tainted Love

I am pleased to announce a new Citizen Lab report, entitled “Tainted Leaks: Disinformation and Phishing With a Russian Nexus.” The report is authored by the Citizen Lab’s Adam Hulcoop, John Scott-Railton, Peter Tanchak, Matt Brooks, and me, and can be found here.

Our report uncovers a major disinformation and cyber espionage campaign with hundreds of targets in government, industry, military, and civil society. Those targets include a large list of high-profile individuals from at least 39 countries (including members of 28 governments), as well as the United Nations and NATO. Although there are many government, military, and industry targets, our report provides further evidence of the often-overlooked targeting of civil society in cyber espionage campaigns.  Civil society — including journalists, academics, opposition figures, and activists — comprises the second largest group (21%) of targets, after government.

Other notable targets include:

  • A former Russian prime minister
  • A former U.S. Deputy Under Secretary of Defense and a former senior director of the U.S. National Security Council
  • The Austrian ambassador to a Nordic country and the former ambassador to Canada for a Eurasian country
  • Senior members of the oil, gas, mining, and finance industries of the former Soviet states
  • United Nations officials
  • Military personnel from Albania, Armenia, Azerbaijan, Georgia, Greece, Latvia, Montenegro, Mozambique, Pakistan, Saudi Arabia, Sweden, Turkey, Ukraine, and the United States, as well as NATO officials
  • Politicians, public servants and government officials from Afghanistan, Armenia, Austria, Cambodia, Egypt, Georgia, Kazakhstan, Kyrgyzstan, Latvia, Peru, Russia, Slovakia, Slovenia, Sudan, Thailand, Turkey, Ukraine, Uzbekistan and Vietnam

While we have no “smoking gun” that provides definitive proof linking what we discovered to a particular government agency (a common challenge in open source investigations like ours), our report nonetheless provides clear evidence of overlap with what has been publicly reported in numerous industry and government reports about Russian cyber espionage. This overlap includes technical details associated with the successful breach in 2016 of the email account of John Podesta, the former chairman of Hillary Clinton’s unsuccessful presidential campaign.

As is often the case with Citizen Lab research on targeted threats, our report began with a “patient zero” — in this case, the prominent journalist David Satter.  Satter is a well-known author on Russian autocracy. He was banned from Russia in 2013 for his investigative reporting on corruption and abuse of power associated with the Putin regime.  In October 2016, Satter’s Gmail account was successfully phished.  Documents stolen from his account then appeared on the website of CyberBerkut, a self-described pro-Russian hacktivist group.  Using the genuine documents obtained with Satter’s consent, our report details the disinformation campaign that was orchestrated around his stolen emails to give the false impression that Satter was part of a CIA-backed plot to discredit Putin and his adversaries and engineer a “colour revolution.”  The disinformation was also aimed at fabricating an association between Satter, western NGOs, and prominent Russian opposition figures, most notably the anti-corruption activist Alexei Navalny.

A very detailed technical analysis of the infrastructure and methods used in the phishing attack on Satter, led by Citizen Lab’s Adam Hulcoop, then allowed us to unravel and ultimately identify a much larger group of over 200 individuals across 39 countries targeted by the same operators.  Not since our Tracking GhostNet report in 2009 do I recall us discovering such an extensive list of high-profile targets of a single cyber espionage campaign.

Why target civil society? For many powerful elites, a vibrant civil society is the antithesis to their corrupt aims.   In the case of Russia, the motivations behind cyber espionage are as much about securing Putin’s kleptocracy as they are geopolitical competition.  It often matters just as much for the Kremlin to know what critical exposé is going to be published on Putin’s inner circle, or what demonstration is going to be organized in the streets of St. Petersburg, as it does what happens in corporate boardrooms or government headquarters abroad. This means journalists, activists, and opposition figures — both domestically and around the world — bear a large burden of the spying.

Our report also offers a detailed glimpse of the new frontier of digital disinformation.  Tainted leaks, such as those analyzed in our report, present complex challenges to the public.  Fake information scattered amongst genuine materials — “falsehoods in a forest of facts” as Citizen Lab’s John Scott-Railton referred to them —  is very difficult to distinguish and counter, especially when it is presented as a salacious “leak” integrated with what otherwise would be private information.

Russia has a long history of experience with what is known as dezinformatsiya, going back to Soviet times.  The prospect of a country with superpower resources engaging in systematic “tainted leak” operations, built on data stolen by affiliated cyber criminal “proxy” groups, is daunting.  Even more daunting is the prospect that its success will breed similar campaigns undertaken by other governments.  To the extent that the model is both cheap and effective, and provides plausible deniability when outsourced to the shady underworld, it will almost certainly inspire other governments to follow suit.

With digital insecurity and data breaches now a pervasive and growing problem, it is highly likely digital disinformation operations are going to become widespread. Indeed, we could be on the cusp of a new era of superpower-enabled, digital disinformation.  The public’s faith in media (which is already very low), and the ability of civil society to do its job effectively, will both invariably suffer as collateral damage.

Our hope is that in studying closely and publishing the details of such tainted leak operations, our report will help us better understand how to recognize and mitigate them.  We also hope that in highlighting the large number of civil society members targeted in yet another cyber espionage campaign, the “silent epidemic” can be properly addressed by policymakers, industry, and others.

One final note concerning notification: we chose not to identify targeted or victimized individuals without their consent in order to protect their privacy.  Instead, we have notified the email service provider and relevant Computer Emergency Response Teams.

Report URL:


We Chat (But Not about Everything)

Imagine if your favourite social media application silently censored your posts, but gave you no information about what topics are censored.

Imagine if everything seemed fine as you posted message after message and image after image, for days on end with no issues, but then occasionally one of your posts would simply not appear without explanation.

And what if the messages or images you are prevented from posting sometimes seem connected with a controversial political issue, but other times not?  Perhaps it’s deliberate, you might guess. Perhaps it’s just you and your bad Internet connection?  Who can say for sure?

Unfortunately, this Kafkaesque situation is the reality for well over a billion users of WeChat and Sina Weibo, two of China’s largest social media applications and among the largest in the world.

Our new report provides detailed evidence from systematic experiments we have been performing on WeChat and Sina Weibo to uncover censorship on each of the applications.  As with prior reports on the applications, we are interested in enumerating censored topics — a difficult task, since neither company is transparent about what it blocks.

For our latest research, we focused on censorship of discussions about the so-called “709 Crackdown.” This crackdown refers to the nationwide targeting by China’s police of nearly 250 human rights lawyers and activists, as well as some of their staff and family members, since July 9, 2015, when lawyer Wang Yu (王宇) and her husband Bao Longjun (包龙军) were forcibly “disappeared.”  The 709 Crackdown is considered one of the harshest systematic measures of repression imposed on civil society by China since 1989, and is the subject of much ongoing international media and human rights discussion.

Unfortunately, as our experiments show, a good portion of that discussion fails to reach Chinese users of WeChat and Weibo. Our research shows that certain combinations of keywords, when sent together in a text message, are censored; when sent alone, they are not.  So, for example, if one were to text 中国大陆 (Mainland China) or 王全璋的妻子 (Wang Quanzhang’s Wife) or 家属的打压 (Harassment on Relatives) individually, the messages would get through.  Sent together, however, the message would be censored.  The Citizen Lab’s Andrew Hilts has created a visualization showing these keyword combinations here:

In addition to the large number of censored keyword combinations our tests unearthed, we also discovered 58 images related to the 709 Crackdown that were censored on WeChat Moments for accounts registered with a mainland China phone number. (For accounts registered with a non-mainland phone number, on the other hand, the images and keyword combinations go through fine.) This is the first time we have documented censorship of images on a social media platform, and we are continuing to investigate the exact mechanism by which it takes place.
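The combination testing described above lends itself to a simple harness. The sketch below is illustrative only: `send_message` stands in for actually delivering a text between two test accounts and reporting whether it arrived, and the simulated filter merely reproduces the behaviour we observed (individual keywords pass, the full combination is blocked):

```python
from itertools import combinations

# Keyword list drawn from the 709 Crackdown examples above.
keywords = ["中国大陆", "王全璋的妻子", "家属的打压"]

def probe(send_message):
    """Test each keyword alone, then every pairing and the full set.

    `send_message(text)` should return True if the message arrived;
    returns the list of keyword combinations that were blocked.
    """
    blocked = []
    for r in range(1, len(keywords) + 1):
        for combo in combinations(keywords, r):
            if not send_message(" ".join(combo)):
                blocked.append(combo)
    return blocked

# Simulated filter matching the observed behaviour: a message is
# blocked only when all three keywords appear together.
def simulated_send(text):
    return not all(k in text for k in keywords)

print(probe(simulated_send))  # only the full three-keyword combination
```

In practice, such a harness would be run between a mainland-registered and a non-mainland-registered account pair, with delivery checked on the receiving side, since the filtering happens server-side and silently.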

The purpose of Citizen Lab’s research on applications like WeChat and Weibo is to better understand and bring transparency to restrictions such as these. We live in a world in which our choices and decisions are increasingly determined by algorithms buried in the applications we use.  What websites we visit, with whom we communicate, and what we say and do online are all increasingly determined by these code-based rules.  Whether those algorithms are fair or not, whether they respect human rights, whether they make mistakes or not, are all questions that can only be answered if the algorithms can be properly examined.

Unfortunately, many social media companies hide their algorithms, either for proprietary and financial reasons (they want to protect the “secret sauce” that earns them money) or for political reasons (their algorithms are used to enforce restrictions on speech and they don’t want their customers to know about it).  Our research aims to break through that obfuscation and hold such algorithms to account.

Generally speaking, the algorithms that drive social media censorship or surveillance can operate in one of two ways: either on the client side — meaning, inside the application on your device; or on the server side — meaning, inside one of the company’s computers that runs the service.  Typically, to investigate the former, we rip the application apart — “reverse engineer” it — and subject it to various tests to determine what the algorithm does beneath the surface.

For server-side rules, on the other hand, whatever censorship or surveillance is going on happens inside the company’s infrastructure, making it more challenging to interrogate the rules.  Both WeChat and Weibo perform censorship and surveillance on the server side, so we had to undertake detailed experiments using combinations of keywords and images drawn from news stories and fed into the applications systematically to zero in on what’s filtered.  You can read about these experiments in the full report here:

Our report serves as a reminder that, for a large portion of the world, social media act as gatekeepers of what people can read, say, and see. When they operate in a repressive environment like China, social media can end up surreptitiously preventing important political topics from being discussed.  Our finding that WeChat is now systematically censoring images as well as text opens up the daunting prospect of multi-media censorship and surveillance on social media.

Taming the “Wild West” Commercial Spyware Market

Today, my colleague Sarah McKune and I co-authored an article, entitled “Who’s Watching Little Brother? A Checklist for Accountability in the Industry Behind Government Hacking.”  A blog post about the report can be found here, and the article is available in PDF here.

The report outlines a “checklist” for regulating the commercial spyware market.  As we have reported on numerous occasions as part of Citizen Lab’s research, there is ample evidence of growing abuses in the commercial spyware market. In spite of the pledges made by some in the industry — that self-regulation works, that they are just following “local laws” — we have shown how companies like FinFisher, Hacking Team, and NSO Group supply their products and services to governments that use them to target journalists, human rights defenders, and even anti-obesity activists. We have tracked the proliferation of some of these services to some of the world’s most autocratic regimes.  It is obvious that these abuses will grow unless something is done to mitigate these trends.

Unfortunately, debate about what to do about these abuses has until now revolved, in binary form, around either export controls or an unregulated wild west.  In our article, we instead develop a checklist for a “web of constraints” around the industry that involves multiple strategies and different mechanisms, including the application of existing laws.  We hope this checklist provides a helpful roadmap for policymakers and others who want to do something about the excesses of this industry, and we look forward to feedback.

Read the article here: [PDF]



Mexico, NSO Group, and the Soda Tax

I am pleased to announce a new Citizen Lab report, entitled “Bitter Sweet: Supporters of Mexico’s Soda Tax Targeted With NSO Exploit Links,” authored by John Scott-Railton, Bill Marczak, Claudio Guarnieri, and Masashi Crete-Nishihata.

The full report is here:

New York Times has an exclusive here:

In recent years, the research of the Citizen Lab and others has revealed numerous disturbing cases involving the abuse of commercial spyware: sophisticated products and services ostensibly restricted in their sale to government clients and used solely for legitimate law enforcement purposes.

Contrary to what companies like Hacking Team, Gamma Group, NSO Group, and others claim about industry self-regulation, we have repeatedly uncovered examples where governments have used these powerfully invasive tools to target human rights defenders, journalists, and legitimate political opposition.

To this list, we can now add research scientists and health advocates.

The “Bitter Sweet” case has its origins in a prior Citizen Lab investigation — our Million Dollar Dissident report, in which we found that a UAE-based human rights defender, Ahmed Mansoor, was targeted by UAE authorities using the sophisticated “Pegasus” spyware suite, sold by Israeli cyber warfare company, NSO Group.

As part of that report, we published technical indicators — essentially digital signatures associated with the NSO Group’s infrastructure and operations — and encouraged others to use them to find evidence of more targeting.  When we published our report in August 2016, we knew there was at least one Mexican targeted — a journalist — and so suspected there might be some targeting there.

Shortly after the publication of our report, Citizen Lab was contacted by Access Now, which had received a request for assistance on its digital helpline from two Mexican NGOs working on digital rights and security, R3D and SocialTIC.  Together, we worked to track down suspicious messages received by Mexicans, which led us to the Bitter Sweet case.

The title of our report refers to the fact that all of those whom we found targeted in this campaign were involved in a very high-profile “soda tax” campaign in Mexico. A soda tax is an anti-obesity measure that taxes sugary drinks and sodas in order to lower their consumption.  Although many in Mexico are behind the campaign, some in the beverage industry and their stakeholders are obviously not.

In the midst of controversy around the soda tax campaign, at least three prominent research scientists and health advocates received similar (in some cases, identical) suspicious SMS messages that included telltale signs of NSO Group’s attack infrastructure. Had any of them clicked on the links, their iPhones would have been silently compromised, allowing the perpetrators to listen in on their calls, read their emails and messages, turn on their camera, and track their movements — all without their knowledge.

What is most remarkable about the targeting are the steps the perpetrators took to try to trick the scientists and advocates into clicking on the links.  For example, one of the targets, Dr. Simon Barquera, a well-respected researcher at the Mexican Government’s Instituto Nacional de Salud Pública, received a series of increasingly inflammatory messages.  The first SMS messages concerned fake legal cases in which the scientist was supposedly involved.  Those that followed got more personal: a funeral, allegations that his wife was having an affair (with links to alleged photos), and then, most shocking of all, that his daughter, who was named in the SMS, had been in an accident and was in grave condition, and that Dr. Barquera should click a link to see which hospital emergency room she had been admitted to.

While we can’t attribute this campaign to a particular company or government agency, it is obvious that those behind the targeting have a stake in getting rid of the soda tax, and that points to the beverage industry and its investors and backers in the Mexican government. It is important to point out that Mexico is on record purchasing NSO Group’s services, and NSO Group itself asserts that it only sells to legitimate government representatives.  But clearly NSO’s “lawful intercept” services are not being used in Mexico to fight crime or hunt terrorists, unless those who advocate against obesity are considered criminal terrorists. We feel strongly that both the Mexican and the Israeli governments (the latter approves exports of NSO products) should undertake urgent investigations.

Finally, our report shows the value of careful documentation of suspicious incidents, and of ongoing engagement between researchers, civil society organizations, and those who are targeted by malicious actors who wish to do harm.  The epidemic of targeted digital attacks facing civil society will require an all-of-society defence.  The cooperation shown in this investigation by Citizen Lab researchers, Access Now, R3D, and SocialTIC is a model of how it can be done.

The Easy and Affordable Way to Undertake Cyber Espionage

I am pleased to announce a new Citizen Lab report, entitled “Nile Phish: Large-Scale Phishing Campaign Targeting Egyptian Civil Society,” authored by the Citizen Lab’s John Scott-Railton, Bill Marczak, and Etienne Maynier, in collaboration with Ramy Raoof of the Egyptian Initiative for Personal Rights.

The full report is here:

When most of us think of state cyber espionage, what likely comes to mind are extraordinary technological capabilities: rare unpatched software vulnerabilities discovered by teams of highly skilled operators, or services purchased for millions from shadowy “cyber warfare” companies.  To be sure, some cyber espionage fits this description, as any perusal of the Snowden disclosures or our recent “Million Dollar Dissident” report will show. But not all of it does.  More often than not, cyber espionage is surprisingly low-tech and inexpensive, yet no less effective than the glitzy stereotypes suggest.

The Egyptian “Nile Phish” campaign is a case in point.

In Egypt, an authoritarian country racked by domestic insecurity and political turmoil, the government has mounted a growing crackdown on civil society.  Part of that crackdown involves investigations of alleged “foreign funding” of Egyptian NGOs — known within Egypt as “Case 173.”

Beginning in November 2016, Egyptian NGOs and staff under Case 173 investigation simultaneously began receiving identical, legitimate-looking emails in their inboxes.  Fortunately, technical staff at one such NGO, the Egyptian Initiative for Personal Rights, suspected something wasn’t right and reached out to us at the Citizen Lab for further investigation.

With EIPR’s assistance, we began analyzing the suspicious emails and discreetly contacting other Egyptian organizations and individuals who had received them.  What we discovered was an elaborate, coordinated, multi-phased “phishing” campaign, in which legitimate-looking emails are sent to unsuspecting users in an attempt to trick them into entering their passwords into fraudulent websites controlled by the operators.

If this type of activity sounds familiar, it is because phishing is widely used as a tactic in the world of everyday cyber crime.  Just yesterday, I received a warning from the University of Toronto’s IT support unit about a malicious email sent to faculty and staff with a notice about a non-existent “Campus Security Notification.”  It may also sound familiar because it was precisely this type of phishing tactic that Russian hackers used to compromise the Gmail account of the chairman of the 2016 Hillary Clinton campaign, John Podesta (illustrating the principle that even Great Powers sometimes pick cheap seats, so long as they get where they want to go).

In the case of #NilePhish, Egyptian NGOs and individuals received emails with an invitation to attend a workshop about Case 173.  The operators used language from a real NGO statement that had been circulating among the community, and included as co-sponsors some of the very NGOs that were targeted.  A second wave of phishing emails included what purported to be a list of individuals subject to a travel ban under Case 173 (who among Egyptian civil society wouldn’t be tempted to check if they were included on that list?).  Alongside these carefully crafted emails — and seemingly just to mix things up — generic phishing attempts were sent with email security or fake courier delivery notifications.

Led by John Scott-Railton, our team analyzed the emails and the server infrastructure in detail.  Dozens of fake but legitimate-sounding domains were used by the operators to host websites that appeared to be Dropbox login pages or Gmail “failed login” warning messages.  Emails were sent from addresses like fedex_tracking[@] and dropbox.notfication[@]

Because of mistakes made on the part of the attackers, and our team’s use of multiple data sources and methods that are outlined in the report, we were able to eventually link more than 90 messages sent to seven NGOs and individuals as part of a single concerted campaign.  While we were unable to definitively attribute the campaign to an Egyptian government agency, strong circumstantial evidence supports such a link.  For example, we observed phishing against the colleagues of the Egyptian lawyer Azza Soliman within hours of her arrest in December 2016. The phishing claimed to be a copy of her arrest warrant.  It is highly unlikely a random cyber criminal would be privy to such details, but quite likely someone connected to her arrest would be.

Phishing may be an example of “poor man’s” cyber espionage, but the reason it’s used by everyone from Ukrainian securities fraudsters to Russian hackers to para-state groups is because it works.   From a government perspective, why bother with expensive wire transfers, complicated end user license agreements, third party resellers, and export controls, when a handful of cleverly constructed emails and websites will do the job?

The flip side is that there are cheap and easy ways to defend against phishing: users can be trained to scrutinize links and attachments even in emails that look legitimate, and to spot the telltale signs of malicious ones; tech companies can enable two-factor authentication for access to their services by default; and NGOs can employ dedicated technologists who manage their networks and alert staff to the latest threats.
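One of those telltale signs, the lookalike domain, can even be caught mechanically. The sketch below is a minimal, hypothetical example: it flags any domain that sits within a small edit distance of a known-good allowlist without matching it exactly. The allowlist, threshold, and sample domains are illustrative assumptions, not drawn from the Nile Phish indicators.

```python
# Hypothetical sketch: flag lookalike ("typosquatted") domains by edit
# distance to a small allowlist of known-good domains. The allowlist and
# the distance threshold below are illustrative, not real indicators.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance, computed row by row."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

KNOWN_GOOD = ["dropbox.com", "google.com", "fedex.com"]

def looks_like_phish(domain: str, max_dist: int = 2) -> bool:
    """A domain close to, but not identical with, a known-good domain
    is suspicious."""
    return any(0 < edit_distance(domain, good) <= max_dist
               for good in KNOWN_GOOD)

print(looks_like_phish("dropbox.com"))   # False: exact match is fine
print(looks_like_phish("drobpox.com"))   # True: two swapped letters
```

Real-world checks are more involved (homoglyphs, subdomain tricks, internationalized domain names), but the principle is the same: small, deliberate deviations from a trusted name.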

Fortunately for Egyptian civil society, EIPR is just such an organization.

#NilePhish is ongoing, and we strongly suspect that there may be other targets of this campaign we have not yet identified.  We hope that the detailed indicators we are publishing can be used by systems administrators and others to find more evidence of targeting and alert potential victims.

Read the full report here:

Read EIPR’s report on #NilePhish in Arabic:

The DHS/FBI Report on Russian Hacking was a Predictable Failure

Russian cyber espionage against American political targets has dominated the news in recent months, intensifying last week with President Barack Obama’s announcement of sanctions against Russia.

Cyber espionage is, of course, nothing new. But using data collected in cyber espionage operations to interfere in the U.S. election process on behalf of one of the candidates — one who appears to be smitten with Russian President Vladimir Putin — is a brazen and unprecedented move that deserves a firm political response from the U.S. government on behalf of the public interest.

The expulsion of 35 Russian diplomats, the shutting down of two Russian-owned estates the U.S. claims were used for intelligence activities, and the targeted financial sanctions on Russian individuals and organizations all show the Obama administration understands at least part of what such a firm response should entail.

Unfortunately, the White House failed to deliver the element most critical to the credibility of its action: solid evidence. To be politically effective in today’s Internet age, such a response needs to be backed up with proof. Here, the administration failed miserably, but also predictably. It is not necessarily that it lacks the evidence; the U.S. government simply failed to present it.

My latest piece is an analysis of the DHS/FBI report on Russian cyber espionage, published in Just Security.  Read the entire piece here:

WeChat: “One App, Two Systems”

The days are long gone when we interacted with the Internet as an undifferentiated network. The reality today is that what we communicate online is mediated by companies that own and operate the Internet services we use.  Social media in particular have become, for an increasing number of people, their windows on reality.  Whether, and in what ways, those windows might be distorted — by corporate practices or government directives — is thus a matter of significant public importance (and not always easy to discern with the naked eye).

Take the case of WeChat — the most popular chat application in China, and the fourth largest in the world with 806 million monthly active users.  WeChat is more than just an instant messaging application. It is more like a lifestyle platform.  WeChat subscribers use the app not only to send text, voice, and video but to play games, make mobile payments, hail taxis, and more.

As with all other Internet services operating in China, however, WeChat must comply with extensive government regulations that require companies to police their networks and users, and share user data with security agencies upon request.  In numerous recent case-study reports, Citizen Lab research has found that many China-based applications follow these regulations by building hidden keyword censorship and surveillance into their applications.  WeChat is no exception, although with a twist.

Today, we are releasing a new report, entitled “One App, Two Systems: How WeChat uses one censorship policy in China and another internationally.”  For this report, we undertook several controlled experiments using combinations of China-, Canada-, and U.S.-registered phone numbers and accounts to test for Internet censorship on WeChat’s platform.  What we found was quite surprising.

It turns out that there is substantial censorship on WeChat, but it is split along several dimensions.  There is keyword filtering for users registered with a mainland China phone number, but not for those registering with an international number.  However, we also found that once a user has registered with a mainland China phone number, the censorship follows them around — even if they switch to an international phone number, or work, travel, or study abroad.  To give some context, roughly 50 million overseas Chinese people work and live abroad.  China’s “One App, Two Systems” keeps them under the control of China’s censorship regime no matter where they go. This extraterritorial application of information controls is highly unusual, and certainly a disturbing precedent to set.

We also found censorship worked differently on the one-on-one versus the “group” chat systems.  The latter is a WeChat feature that allows chat groups of up to 500 users.  Our tests found censorship on the group chat system was more extensive, possibly motivated by the desire to restrict speech that might mobilize large groups of people into some kind of activism.  There is also censorship of WeChat’s web browser — but, again, mostly for China-registered users.

Finally, and most troubling, we found that WeChat no longer gives a notice to users about the blocking of chat messages.  In the past, users received a warning saying they couldn’t post a message because it “contains restricted words.” Now if you send a banned keyword, it simply doesn’t appear on the recipient’s screen. It’s like it never happened at all.  This type of “silent” censorship is highly unlikely to be noticed by either communicating party unless one of them thinks to double check (or researchers like us scrutinize it closely).
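The core of the double-checking our tests perform is simple to sketch. The following is a hypothetical, self-contained simulation of how “silent” censorship can be detected: send a set of probe messages, record what the recipient actually receives, and diff the two lists. The banned-keyword set and the send/receive function are stand-ins for the real platform, which of course cannot be reproduced here.

```python
# Hypothetical simulation of detecting "silent" keyword censorship:
# messages that were sent but never arrive, with no error shown to
# either party. The banned list and the mock transport are stand-ins.

BANNED = {"sensitive-keyword"}  # stand-in for an unknown server-side list

def send_and_receive(messages):
    """Mock of the chat platform: silently drops messages that contain
    a banned keyword, exactly as observed on WeChat."""
    return [m for m in messages if not (BANNED & set(m.split()))]

def find_silently_censored(sent):
    """Diff what was sent against what arrived."""
    received = send_and_receive(sent)
    return [m for m in sent if m not in received]

probes = ["hello world", "this mentions sensitive-keyword today"]
print(find_silently_censored(probes))
# -> ['this mentions sensitive-keyword today']
```

In a real experiment, the sending and receiving accounts run on separate devices and the probe list is varied across registration conditions (mainland vs. international numbers) to isolate which dimension triggers the filtering.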

By removing notice of censorship, WeChat sinks deeper into a dark hole of unaccountability to its users.

Research of this sort is essential because it helps pull back the curtain of obscurity that, unfortunately, pervades so much of our digital experiences.  As social media companies increasingly shape and control what users communicate — shape our realities — they affect our ability to exercise our rights to seek and impart information — to exercise our human rights.

China may offer the most extreme examples, as our series of reports on China-based applications has shown, but they are important to study as harbingers of a possible future.  To wit, as our report goes to publication, Facebook is reportedly developing a special censorship system to comply with China’s regulations, one that would “suppress posts from appearing in users’ news feeds.”  Along with WeChat’s “One App, Two Systems” model, the services these two social media giants are offering go a long way to cementing a bifurcated, territorialized, and opaque Internet.

Read the full report here:

What to do about “dual use” digital technologies?

(The following is my written testimony to the Standing Senate Committee on Human Rights of Canada, which will take place November 30, 2016 at 11:30 AM EST and be video webcast here.)*


For over a decade, the Citizen Lab at the Munk School of Global Affairs, University of Toronto has researched and documented information controls that impact the openness and security of the Internet and threaten human rights. Our mission is to produce evidence-based research on cyber security issues that are associated with human rights concerns. We study how governments and the private sector censor the Internet, social media, or mobile applications.  We have done extensive reporting on targeted digital espionage against civil society.  We have produced detailed reports on the companies that sell sophisticated spyware, network monitoring, or other tools, documenting their abuse potential to raise corporate social responsibility concerns.  And we have undertaken extensive technical analysis of popular applications for hidden privacy and security risks. Our goal is to inform the public while meeting high standards of rigor through academic peer review.

Citizen Lab Research into Dual-Use Technologies

One area we are particularly concerned with is the development, sale and operation of so-called “dual-use” technologies that provide capabilities to surveil users or to censor online information at the country network level. These technologies are referred to as “dual-use” because, depending on how they are deployed, they may serve a legitimate and socially beneficial purpose, or, equally well, a purpose that undermines human rights.   

Our research on dual-use technologies has fallen into two categories — those that involve network traffic management, including deep packet inspection and content filtering, and those that involve technologies used for device intrusion for more targeted monitoring.  

The first category of our research concerns certain deep packet inspection (DPI) and Internet filtering technologies that private companies can use for traffic management, but which can also be used by Internet service providers (ISPs) to prevent entire populations from accessing politically sensitive information online and/or be used for mass surveillance. This category of research uses a combination of network measurement methods, technical interrogation tests, and other “fingerprinting” techniques to identify the presence on national networks of such technologies capable of surveillance and filtering, and, where possible, the company supplying the technology. In conducting such research, questions frequently arise regarding the corporate social responsibility practices of the companies developing and selling this technology, as several of our reports in this area have identified equipment and installations sold by companies to regimes with dubious human rights track records. Our research has spotlighted several companies — Blue Coat, Websense, Fortinet, and Netsweeper — that provide filtering and deep packet inspection systems to such rights-abusing countries.  Since Netsweeper is a Canadian headquartered company and has featured repeatedly in our research on this topic, I will provide more details about our findings with respect to them.
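By way of illustration, the “fingerprinting” step can be as simple as matching an HTTP response against text patterns characteristic of a vendor’s blockpage. The sketch below is hypothetical: the vendor names and signature patterns are invented for illustration, whereas real signatures are derived from blockpages observed in the field.

```python
# Illustrative sketch of blockpage "fingerprinting": match the body of
# an HTTP response against signatures that identify a filtering product.
# VendorX/VendorY and their patterns are invented for illustration.

import re

SIGNATURES = {
    "VendorX": re.compile(r"this (page|site) has been blocked", re.I),
    "VendorY": re.compile(r"/deny/blockpage", re.I),
}

def identify_blockpage(body: str):
    """Return the vendor whose signature matches the response, if any."""
    for vendor, pattern in SIGNATURES.items():
        if pattern.search(body):
            return vendor
    return None

print(identify_blockpage("<html>This page has been blocked.</html>"))
# -> VendorX
```

In practice, measurements like these are run from inside the country’s network (or via remote measurement techniques), against URL lists spanning politically sensitive and control categories, so that both the presence of filtering and the identity of the product can be established.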

Netsweeper, Inc. is a privately-owned technology company based in Waterloo, Ontario, Canada, whose primary offering is an Internet content filtering product and service. The company has customers ranging from educational institutions and corporations to national-level Internet Service Providers (ISPs) and telecommunications companies. Internet filtering is widely used on institutional networks, such as schools and libraries, and networks of private companies, to restrict access to a wide range of content. However, when such filtering systems are used to implement state-mandated Internet filtering at the national level, questions around human rights — specifically access to information and freedom of expression — are implicated.

Prior research by the OpenNet Initiative (2003-2013), an inter-university project of which the Citizen Lab was a founding partner, identified the existence of Netsweeper’s filtering technology on ISPs operating in the Middle East, including Qatar, the United Arab Emirates (UAE), Yemen, and Kuwait. Working on its own, the Citizen Lab subsequently outlined evidence of Netsweeper’s products on the networks of Pakistan’s leading ISP, Pakistan Telecommunication Company Limited (PTCL), in a report published in 2013, and discussed their use to block the websites of independent media and content on religion and human rights. In 2014, we reported that Netsweeper products were being used by three ISPs based in Somalia, and raised questions about the human rights implications of selling filtering technology in a failed state. In a 2015 report on information controls in Yemen, we examined the use of Netsweeper technology to filter critical political content, independent media websites, and all URLs belonging to the Israeli (.il) top-level domain in the context of an ongoing armed conflict in which the Houthi rebels had taken over the government and the country’s main ISPs.  Most recently, on September 21, 2016, we published a report that identified Netsweeper installations on nine ISPs in Bahrain, a country with a notoriously bad human rights record, which were being employed to block access to a range of political content.

Some of these reports included letters with questions that we sent to Netsweeper; we also offered to publish any response from the company in full. Aside from a defamation claim filed in January 2016, and then subsequently discontinued in its entirety on April 25, 2016, Netsweeper has not responded to us.

The second category of research where we also apply the term “dual-use” concerns the use of malicious software — “malware” — billed as a tool for “lawful intercept,” e.g. zero-day exploits and remote access trojans that enable surveillance through a user’s device.  A “zero-day” — also known as an 0day — is an undisclosed computer software vulnerability.  Zero days can be precious commodities, and are traded and sold by black, grey, and legitimate market actors.  Law enforcement and intelligence agencies purchase and use zero days or other malware — typically packaged as part of a suite of “solutions” — to surreptitiously get inside a target’s device.  When used without proper safeguards, these tools (and the services that go along with them) can lead to significant human rights abuses.

Our work in this area typically begins with a “patient zero” — someone or some organization that has been targeted with a malware-laden email or link.  In the course of the last few years, we have documented numerous cases of human rights defenders and other civil society groups being targeted with advanced commercial spyware sold by companies like the Italy-based Hacking Team, the UK/Germany/Switzerland-based FinFisher, and the Israel-based NSO Group.  Using network scanning techniques that employ digital fingerprinting for signatures belonging to the so-called “command and control” infrastructure used by this malware, we have also been able to map the proliferation of some of these systems to a large and growing global client base, many of which are governments with notoriously bad human rights records.
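The same fingerprinting logic sketched for blockpages applies to command-and-control servers: probe a candidate host and compare a tuple of observable features against those of a server already tied to the malware. The example below is a simplified, hypothetical sketch; the feature set and fingerprint values are invented for illustration and are not drawn from any real campaign.

```python
# Hypothetical sketch of command-and-control fingerprinting: reduce a
# server's observable response to a small feature tuple and compare it
# to the tuple of a known C2 server. All values here are invented.

import hashlib

def features(status, headers, body):
    """Distill a probe response into comparable features: status code,
    the Server header, and a truncated hash of the decoy page body."""
    return (status,
            headers.get("Server", ""),
            hashlib.sha256(body).hexdigest()[:16])

# Fingerprint previously derived from a confirmed C2 server (invented).
KNOWN_C2 = features(404, {"Server": "nginx/1.4.6"}, b"decoy page")

def matches_known_c2(status, headers, body):
    return features(status, headers, body) == KNOWN_C2

print(matches_known_c2(404, {"Server": "nginx/1.4.6"}, b"decoy page"))
# -> True
```

Scanning large address spaces for hosts matching such a fingerprint is how a single “patient zero” sample can be expanded into a map of an operator’s global infrastructure and client base.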

The data released by Citizen Lab from these projects has inspired legal and advocacy campaigns, formed much of the evidentiary basis for measures undertaken in multiple countries to control unregulated surveillance practices (e.g., the 2013 modifications to the Wassenaar Arrangement), prompted further disclosures and investigations regarding the use of spyware and filtering technologies, and resulted in specific remediation in the form of software updates to entire consumer populations (e.g., patches to Apple’s OS X and iOS in the case of our “Million Dollar Dissident” report).

Nonetheless, our findings are only touching on a small area of what is a very disturbing larger picture.  The market for dual-use technologies, particularly spyware, is growing rapidly. Government demand for these technologies may actually be increasing following the Snowden disclosures, which raised the bar on what is deemed de rigueur in digital surveillance, and ironically may have intensified competition around the sale of zero-day exploits, and methods for defeating increasingly pervasive end-to-end encryption and other defensive measures. For example, the U.K.’s proposed Investigatory Powers Bill, at the time of writing awaiting Royal assent before becoming law, will authorize U.K. agencies to hack into targeted devices as well as “bulk networks” — meaning all devices associated with a particular geographic area.

Although Citizen Lab research has not to date identified a Canadian-based vendor of commercial spyware selling to a rights-abusing country or being used to target human rights defenders, we know that companies selling this type of technology exist.  Furthermore, the growth of the spyware market, coupled with the other circumstances outlined above, suggests it is highly likely that a Canadian vendor will at some point in the not-too-distant future face the choice of whether or not to sell its technology and services to a rights-abusing country — if it has not already.  Indeed, it is worth pointing out that parts of a very controversial mass surveillance system implemented in Turkey by the US-based company Procera were reportedly outsourced to a Canadian software development company, Northforge, after engineers at Procera threatened to resign for fear of assisting President Erdogan’s draconian policies.

What is To Be Done?

Rectifying the abuse of dual-use technologies is not a simple matter, but it is one where the Government of Canada can play a constructive role. Effective solutions that encourage respect for human rights will depend on two key components: transparency of the market, and creation of an incentive structure to which private sector actors will respond.  

Transparency of the Market

The primary impediment to any progress regarding dual-use technologies of concern is the lack of transparency in the market. It is impossible for non-governmental entities to accurately gauge the scale and capabilities of the dual-use technology sector. While research such as that of the Citizen Lab and Privacy International has drawn attention to the problem and highlighted certain notorious companies, sources of research data and our capacity to undertake research are limited.  Meanwhile, new actors and technologies are regularly emerging or undergoing transformation as they change ownership, headquarters, or name. Many dual-use technology companies are not transparent about the full range of products and services they sell or their clients, and the sector as a whole is shrouded in secrecy.

With their proven potential for abuse, technologies that enable countrywide Internet filtering and digital surveillance merit increased scrutiny by the government and the public. It is telling that in many countries, government officials themselves are unable to obtain a complete picture of the technologies designed, manufactured, and serviced within their borders that could be used to suppress legitimate dissent or undermine other internationally-recognized human rights. Irrespective of whether the government chooses to regulate the sale of particular technologies, some form of mandated transparency in the market for filtering and surveillance tools is essential to addressing this information gap and informing good policy.

Mandated transparency could take a number of forms, but at a minimum will require “lawful intercept,” Internet filtering, and, possibly, DPI providers that offer their products and services in the marketplace to self-identify and report as a matter of public record. An analogous model may be found in the work of the United Nations Working Group on Mercenaries, which has drafted a proposed convention regarding regulation of private military and security companies (PMSCs). The convention envisions a general state registry of the PMSCs operating in a state’s jurisdiction, as part of a broader framework for oversight and accountability.

Transparency can emerge from research. It is noteworthy that the little we know about the abuse of dual-use technologies comes primarily from rigorous, evidence-based and interdisciplinary research of the sort Citizen Lab has done. As a complement to mandated transparency, the Government of Canada could encourage this type of mixed methods research into the dual-use technology market through research funding bodies like SSHRC and NSERC, and the Canada Research Chair program. It could also develop legislation specifically designed to provide safe harbor for security research undertaken in the public interest and incorporating responsible disclosure.

Incentivizing the Private Sector to Respect Human Rights

As the UN Guiding Principles on Business and Human Rights make clear, business enterprises have the responsibility to respect internationally-recognized human rights, in their own activities as well as activities linked to their operations, products or services. At present, however, there are few if any costs incurred by the companies that supply and service dual-use technologies when such technologies are used to violate human rights. Repeatedly we have seen that, when surveillance and filtering technologies are used against journalists, activists, and other peaceful actors, the companies involved treat the matter as “water off a duck’s back”: they assert that their products are provided for lawful purposes only, benefit society, and are beyond their control in the hands of their clients. They wait for the news cycle to pass. Many companies, particularly those that supply lawful intercept products, are further insulated by the secrecy surrounding intelligence and law enforcement work and the national security prerogatives of their clientele, most of whom lack oversight or public accountability themselves.

Yet it has become increasingly clear, as evidenced by Citizen Lab and other research, that while these technologies may be used to hunt criminals and terrorists or otherwise serve a legitimate security purpose, they are simultaneously deployed against regime critics, political opponents, and other non-violent actors with alarming frequency. Regimes that lack robust rule of law and due process while facing legitimation crises and domestic dissent simply do not distinguish among targets when leveraging the advanced technologies supplied by the private sector. It has come to light that private companies may even have detailed knowledge of attacks against civil society that are reliant on their products, as they participate in trouble-shooting delivery of malware and provide other forms of expertise to their clients. Companies, however, have managed to continue to grow and develop the sector without consequence by avoiding any form of engagement on the question of human rights.

Significant intervention is required to eliminate company expectations of immunity and prompt rights-based reform. In a forthcoming piece, my colleague Sarah McKune and I lay out several areas that we feel could help control the excesses of the commercial spyware market, by shifting the costs from the public to the spyware companies themselves, in order to generate changes in company risk-opportunity calculations, practices, and overall attitude. The drastic change in incentive structure necessary to curb the abuses of this industry will rely on a combination of (1) regulation and policy, and (2) access to remedy.

  1. Regulation and policy

Export controls are a first step in the regulatory process. The Canadian government currently has in place export controls and regulations against the sale of certain types of technologies to certain foreign jurisdictions, including those relating to “IP network communications surveillance systems or equipment” and “intrusion software” (categories that correspond to a large degree to the Citizen Lab research outlined above). These two additions to the control lists were made in response to modifications made in 2013 to the Wassenaar Arrangement, of which Canada is a member. Canada has released statistics concerning 2015 export licenses, including those pertaining to intrusion software and IP network surveillance, which can be found here.  Although it is impossible to know which items in particular were granted licenses or what considerations were made in doing so, it is noteworthy that within the relevant category, 2202 license applications were granted, while only 2 were denied. Regardless, export controls by themselves are insufficient to address the human rights concerns associated with these items.

As various members of the Wassenaar Arrangement rolled out implementation of the 2013 controls at the national level, the challenges of relying on export controls to address the serious rights implications of dual-use technologies became evident. One key problem is designating the scope of the items to be controlled in an appropriate and predictable manner, avoiding both over- and under-inclusion. For example, with respect to items related to “intrusion software,” certain technologies anticipated to fall within the scope of the control are also used for legitimate security research. At the same time, the 2013 controls do not cover Internet filtering and other technologies with significant human rights implications. For example, companies that provide Internet traffic management under the term “Quality of Service” (QoS) are explicitly excluded from Wassenaar targeted items. Yet, while QoS technologies are certainly integral to the proper functioning of network traffic service delivery today, they can also be used to throttle traffic or certain protocols associated with specific applications. If used in contexts where the aim is to limit free expression, privacy, or access to information — as evidenced in a rising number of troubling country cases — then human rights are certainly implicated.

Lastly, the Wassenaar Arrangement’s 2013 controls are now on uncertain ground after the United States gave notice that it intends to renegotiate the agreement following major criticisms put forward primarily by security researchers and the private sector. The U.S. decision to reopen negotiations on these Wassenaar controls will, in turn, almost certainly affect Canada’s obligations.

A second challenge lies in the export licensing process carried out at the national level. Even when a dual-use technology is subject to control, the licensing process must be properly calibrated to address the end users and end uses of concern from a human rights perspective. This accounting requires an ever-evolving assessment, combined with the political will to both curb access within a broad group of countries (some of which may be of strategic importance to Canada) and restrict the sales of domestic corporations. As we have witnessed, the post-2013 licensing processes surrounding spyware have left much to be desired: Italian authorities approved an initial grant of a “global authorization” to Hacking Team, which permitted it to export its spyware to destinations such as Kazakhstan; and the Israeli authorities gave approval to NSO Group to export sophisticated iOS zero-day exploits to the United Arab Emirates, where we discovered they were subsequently used against a peaceful dissident and other political targets.

For these and other reasons, export controls, while important, constitute only one means by which the Government of Canada can help constrain the abuse of dual-use technologies. In tailoring applicable export controls, Canada can certainly take a proactive stance on addressing the end users and end uses that pose human rights risks. At the same time, however, such efforts can be complemented by additional regulatory and policy measures. Measures worth exploring include:

  • Government procurement and export credit or assistance policies that require vendors of dual-use technologies to demonstrate company commitment to and record of human rights due diligence. Vendors that have engaged in fraudulent or illegal practices, or have supplied technology that has facilitated human rights abuses, should be ineligible for award of government contracts or support in any form.
  • Enhanced consumer protection laws and active efforts at consumer protection agencies to address the misuse of DPI, Internet filtering technology, and spyware against the public.
  • A regulatory framework for oversight and accountability specifically tailored to dual-use technologies. The framework proposed in the context of PMSCs, as noted above, offers a number of elements that could be considered for inclusion, such as enumerating prohibited activities; establishing requirements for training of personnel; assessing company compliance with domestic and international law; and investigating reports of violations.
  • Structured dialogue with companies and civil society regarding the establishment of industry self-regulation, which can be modeled on the International Code of Conduct for Private Security Service Providers and its multistakeholder association. Such a dialogue could include work on model contracts and best practices for “lawful intercept” and Internet filtering technology providers.

  2. Access to remedy

When dual-use technology companies provide products and services used to undermine human rights, or when they engage in practices that are fraudulent or illegal in relevant jurisdictions (e.g., practices that are violative of intellectual property, consumer protection, privacy, or computer crime laws), it is appropriate that those harmed by such activity may seek remedy against them. Canadian law could ensure that criminal or civil litigation is possible in such circumstances, including through the clear establishment of jurisdiction over actors that operate transnationally or may be state-linked. Exposure to liability for misconduct will be the primary motivating force behind any change in this sector.

The Government of Canada is a vocal supporter of Internet freedom and human rights, and is a member in all of the relevant international bodies in which such topics are discussed.

But the fact that Citizen Lab has documented at least seven countries whose national ISPs use or have used a Canadian company’s services to censor Internet content protected under internationally-recognized human rights agreements is an embarrassing black mark for all Canadians. While we have no evidence that a Canadian intrusion software, DPI, or IP monitoring vendor has sold its services to a rights-abusing country, that does not necessarily mean it has not happened, or will not happen in the future.  The Turkey-Procera case, outlined earlier, should certainly raise alarm bells.

By proactively addressing the regulation of dual-use technologies in ways outlined above, the Government of Canada would align its actions with its words, and ensure business considerations are not undertaken without human rights concerns being addressed.

*The author gratefully acknowledges the input of Sarah McKune, Senior Legal Advisor, Citizen Lab, who assisted in the preparation and writing of this testimony, and John Scott-Railton, Citizen Lab senior researcher, for comments and feedback.