Earlier today, the New York Times reported that the National Security Agency has secretly expanded its role in domestic cybersecurity. In short, the NSA believes it has authority to operate a warrantless, signature-based intrusion detection system—on the Internet backbone.[1]
Because of the program’s technical and legal intricacies, the Times-ProPublica team asked me to review and explain related primary documents.[2] I have high confidence in the report’s factual accuracy.[3]
Since this morning’s coverage is calibrated for a general audience, I’d like to provide some additional detail. I’d also like to explain why, in my view, the news is a game-changer for information sharing legislation.
In the sharing economy, you can hire a one-off driver (Uber), courier (Postmates), grocery shopper (Instacart), housekeeper (Homejoy), or just about any other variety of henchman (TaskRabbit). So, what about hiring a hacker?
That’s the premise of Hacker’s List, a website launched in November. Anyone can post or bid on a hacking project. Hacker’s List arranges secure communication and payment escrow.
An online black market is, to be sure, nothing new. The rise and fall of the Silk Road received extensive media coverage.
What’s unusual about Hacker’s List is that it, purportedly, isn’t a black market. The website is public, projects and bids are open (albeit pseudonymous), and the owner has identified himself. (He runs a small security firm in Denver.) Hacker’s List was even featured on the front page of the New York Times.
Out of curiosity, I decided to take advantage of this openness. Who tries to hire a hacker? Is the website as popular as its owner claims? Most importantly, does the website facilitate illegal transactions, or solely white hat hacking?
To answer these questions—and, admittedly, to procrastinate on my dissertation—I cobbled together a crawler. You can find the source on GitHub, and the crawl data on Google Docs.
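The actual crawler is on GitHub, but the gist of the approach can be sketched in a few lines. Everything specific below is hypothetical: the listing URL, the pagination scheme, and the CSS class names are illustrative stand-ins, not Hacker’s List’s actual markup.

```python
# A minimal sketch of the crawling approach, NOT the original crawler.
# The URL pattern and class names ("project-title", "bid-count") are
# hypothetical placeholders for whatever the real listing pages use.
from html.parser import HTMLParser
from urllib.request import urlopen


class ProjectParser(HTMLParser):
    """Collect (title, bid count) pairs from a hypothetical listing page."""

    def __init__(self):
        super().__init__()
        self.projects = []
        self._field = None  # which field the parser is currently inside

    def handle_starttag(self, tag, attrs):
        classes = dict(attrs).get("class", "")
        if "project-title" in classes:
            self._field = "title"
        elif "bid-count" in classes:
            self._field = "bids"

    def handle_data(self, data):
        if self._field == "title":
            self.projects.append({"title": data.strip(), "bids": 0})
        elif self._field == "bids":
            self.projects[-1]["bids"] = int(data.strip())
        self._field = None


def parse_projects(page_html):
    parser = ProjectParser()
    parser.feed(page_html)
    return parser.projects


def crawl(pages=5):
    # Walk a hypothetical paginated listing. A polite real-world crawl
    # would also rate-limit itself and honor robots.txt.
    results = []
    for n in range(1, pages + 1):
        page = urlopen(f"https://example.com/projects?page={n}").read().decode()
        results.extend(parse_projects(page))
    return results
```

With project titles and bid counts in hand, tallying requests by category and counting projects that actually attracted bids is a one-liner over the resulting list of dicts.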
Here’s the short version: most requests are unsophisticated and unlawful, very few deals are actually struck, and most completed projects appear to be criminal.
A good Washington talking point delivers zero content. A great Washington talking point sounds substantive… while delivering zero content.
In the spirit of honoring greatness, I’d like to call attention to the current White House position on cryptographic backdoors. It received its most public airing from President Obama, in a February 13 interview with RE/CODE.
“I’m a strong believer in strong encryption,” explained the President. “[T]here’s no scenario in which we don’t want really strong encryption.”
President Obama isn’t the only official invoking “strong encryption.” (And strongly, too.) In just about every recent conversation with an administration policymaker, I’ve been subjected to some version of the line.
According to law enforcement and intelligence agencies, encryption should come with a backdoor. It’s not a new policy position—it dates to the Crypto Wars of the 1990s—but it’s gaining new Beltway currency.
Cryptographic backdoors are a bad idea. They introduce unquantifiable security risks, like the recent FREAK vulnerability. They could equip oppressive governments, not just the United States. They chill free speech. They impose costs on innovators and reduce foreign demand for American products. The list of objections runs long.
I’d like to articulate an additional, pragmatic argument against backdoors. It’s a little subtle, and it cuts across technology, policy, and law. Once you see it, though, you can’t unsee it.
Cryptographic backdoors will not work. As a matter of technology, they are deeply incompatible with modern software platforms. And as a matter of policy and law, addressing those incompatibilities would require intolerable regulation of the technology sector. Any attempt to mandate backdoors will merely escalate an arms race, where usable and secure software stays a step ahead of the government.
The easiest way to understand the argument is to walk through a hypothetical. I’m going to use Android; much of the same analysis would apply to iOS or any other mobile platform.
I’m excited to be teaching Stanford Law’s first Coursera offering this fall, on government surveillance. In preparation, I’ve been extensively poking around the platform; while I found some snazzy features, I also stumbled across a few security and privacy issues.
- Any teacher can dump the entire user database, including over nine million names and email addresses.
- If you are logged into your Coursera account, any website that you visit can list your course enrollments.
- Coursera’s privacy-protecting user IDs don’t do much privacy protecting.
The balance of this piece provides some detail on each of the vulnerabilities.
Update 9/4: Coursera has acknowledged the issues, and claims they are “fully addressed.” The second vulnerability, however, still exists.
Update 9/6: Coursera appears to have imposed rate limiting on the APIs associated with the second vulnerability, mitigating the risk to users. A malicious website can now iterate over about 10% of the course catalog before having to wait.
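As a general illustration of the third issue: when a “privacy-protecting” identifier is a deterministic function of a small, sequential user number, anyone can precompute the entire mapping and invert any ID they encounter. The derivation below is hypothetical (this is the class of mistake, not Coursera’s actual scheme):

```python
# Illustration of why deterministic pseudonymous IDs leak identity.
# The hash-of-sequential-number scheme here is hypothetical, used only
# to show the general weakness.
import hashlib


def pseudonymous_id(user_number: int) -> str:
    """Derive an 'anonymized' ID from a small sequential user number."""
    return hashlib.sha256(str(user_number).encode()).hexdigest()[:16]


def build_reverse_table(max_users: int) -> dict:
    """Precompute ID -> user number for every plausible user number.

    Because the input space is tiny, an attacker can enumerate it
    exhaustively and thereby de-anonymize any observed ID.
    """
    return {pseudonymous_id(n): n for n in range(max_users)}
```

The fix is equally general: an opaque identifier should be a random value stored in a lookup table (or keyed with a secret), never a public function of guessable input.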
Does the Fourth Amendment protect SSL keys? Not really, argues the executive branch in Lavabit’s appeal. “[A] business cannot prevent the execution of a search warrant by locking its front gate.”[1]
True enough. But a business does have a constitutional right to keep that gate intact. When executing a warrant, officers must ordinarily announce themselves and afford an opportunity to open up.
The National Security Agency works to circumvent cryptography. In the abstract, that’s hardly objectionable—legitimate intelligence targets may adopt security measures. Concerns arise, however, when the NSA subverts the technologies that ordinary consumers and businesses rely upon. Longstanding conventional wisdom in the computer security community has been that the NSA works to insert backdoors into crypto standards and security products, and that the agency hoards vulnerabilities in popular crypto algorithms and implementations. Widely read reports recently confirmed these views.
The go-to recommendation among many security experts has been to deploy additional protective measures. That’s an appealing near-term option for sophisticated users and companies. It’s largely impractical for ordinary users, however. And adding more crypto won’t restore damaged trust, shut potentially risky backdoors, or patch vulnerable systems.
Original at Freedom to Tinker.
Late last year, the Obama administration reopened talks with Russia over the militarization of cyberspace and assented to cybersecurity discussion in the United Nations First Committee (Disarmament and International Security). In this three-part series, I probe Russian and American foreign policy on cyberwarfare and advance the thesis that the Russians are negotiating for specific strategic or diplomatic gains, while the Americans are invested chiefly in the process itself, owing to the “reset” in Russian relations and changing perceptions of cyberwarfare.
This first post rebuts the Russians’ purported rationale for talks: avoiding a security dilemma.
Original at Freedom to Tinker.
In a recent interview, prominent antivirus developer Eugene Kaspersky decried the role of anonymity in cybercrime. This is not a new claim—it is touched on in the Commission on Cybersecurity for the 44th Presidency report and the Cybersecurity Act of 2009, among others—but it misses the mark. Any Internet design would allow anonymity. What renders our Internet vulnerable is primarily weaknesses in software security and authentication, not anonymity.