In the sharing economy, you can hire a one-off driver (Uber), courier (Postmates), grocery shopper (Instacart), housekeeper (Homejoy), or just about any other variety of henchman (TaskRabbit). So, what about hiring a hacker?
That’s the premise of Hacker’s List, a website launched in November. Anyone can post or bid on a hacking project. Hacker’s List arranges secure communication and payment escrow.
An online black market is, to be sure, nothing new. The rise and fall of the Silk Road received extensive media coverage.
What’s unusual about Hacker’s List is that it, purportedly, isn’t a black market. The website is public, projects and bids are open (albeit pseudonymous), and the owner has identified himself. (He runs a small security firm in Denver.) Hacker’s List was even featured on the front page of the New York Times.
Out of curiosity, I decided to leverage this openness. Who tries to hire a hacker? Is the website as popular as its owner claims? Most importantly, does the website facilitate illegal transactions, or solely white hat hacking?
To answer these questions—and, admittedly, to procrastinate on my dissertation—I cobbled together a crawler. You can find the source on GitHub, and the crawl data on Google Docs.
Here’s the short version: most requests are unsophisticated and unlawful, very few deals are actually struck, and most completed projects appear to be criminal.
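The crawler itself is unremarkable. The sketch below shows the flavor of the scraping step: pulling project titles out of a listing page with the standard library's HTML parser. The markup and class names here are hypothetical stand-ins, not Hacker's List's actual HTML; the real source is in the GitHub repository linked above.

```python
# Illustrative scraping step: extract project titles from a listing page.
# The "project-title" class and sample markup are hypothetical.
from html.parser import HTMLParser

class ProjectParser(HTMLParser):
    """Collects the text of elements tagged class="project-title"."""

    def __init__(self):
        super().__init__()
        self.in_title = False
        self.titles = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs.
        if dict(attrs).get("class") == "project-title":
            self.in_title = True

    def handle_data(self, data):
        if self.in_title and data.strip():
            self.titles.append(data.strip())
            self.in_title = False

sample = """
<div class="project"><span class="project-title">Recover a Gmail password</span></div>
<div class="project"><span class="project-title">Audit my company website</span></div>
"""
parser = ProjectParser()
parser.feed(sample)
print(parser.titles)
```

Running the parser over each listing page, then grouping requests by keyword, is enough to support the rough classification above.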
A good Washington talking point delivers zero content. A great Washington talking point sounds substantive… while delivering zero content.
In the spirit of honoring greatness, I’d like to call attention to the current White House position on cryptographic backdoors. It received its most public airing from President Obama, in a February 13 interview with RE/CODE.
“I’m a strong believer in strong encryption,” explained the President. “[T]here’s no scenario in which we don’t want really strong encryption.”
President Obama isn’t the only official invoking “strong encryption.” (And strongly, too.) In just about every recent conversation with an administration policymaker, I’ve been subjected to some version of the line.
According to law enforcement and intelligence agencies, encryption should come with a backdoor. It’s not a new policy position—it dates to the Crypto Wars of the 1990s—but it’s gaining new Beltway currency.
Cryptographic backdoors are a bad idea. They introduce unquantifiable security risks, like the recent FREAK vulnerability. They could equip oppressive governments, not just the United States. They chill free speech. They impose costs on innovators and reduce foreign demand for American products. The list of objections runs long.
I’d like to articulate an additional, pragmatic argument against backdoors. It’s a little subtle, and it cuts across technology, policy, and law. Once you see it, though, you can’t unsee it.
Cryptographic backdoors will not work. As a matter of technology, they are deeply incompatible with modern software platforms. And as a matter of policy and law, addressing those incompatibilities would require intolerable regulation of the technology sector. Any attempt to mandate backdoors will merely escalate an arms race, where usable and secure software stays a step ahead of the government.
The easiest way to understand the argument is to walk through a hypothetical. I’m going to use Android; much of the same analysis would apply to iOS or any other mobile platform.
Verizon Wireless injects a unique header into customer web traffic. When the practice came to light last year, it was widely panned. Numerous security researchers pointed out that this “supercookie” could trivially be used to track mobile subscribers, even if they had opted out, cleared their cookies, or entered private browsing mode. But Verizon persisted, emphasizing that its own business model did not use the header for tracking.
Out of curiosity, I went looking for a company that was taking advantage of the Verizon header to track consumers. I found one—Turn, a headline Verizon advertising partner. They’re “bringing sexy back to measurement.”
Earlier this week, the Ninth Circuit heard oral arguments in a challenge to the NSA’s phone metadata program. While watching, I noticed some quite misleading legal claims by the government’s counsel. I then reviewed last month’s oral arguments in the D.C. Circuit, and I spotted a similar assertion.
In both cases, the government attorney waved away constitutional concerns about medical and financial records. Congress, he suggested, has already stepped in to protect those files.
With respect to ordinary law enforcement investigations, that’s only slightly true. And with respect to national security investigations, that’s really not right.
When the National Security Agency collects data inside the United States, it’s regulated by the Foreign Intelligence Surveillance Act. There’s a degree of court supervision and congressional oversight.
When the agency collects data outside the United States, it’s regulated by Executive Order 12333. That document embodies the President’s inherent Article II authority to conduct foreign intelligence. There’s no court involvement, and there’s scant legislative scrutiny.
So, that’s the conventional wisdom. American soil: FISA. Foreign soil: EO 12333. Unfortunately, the legal landscape is more complicated.
In this post, I’ll sketch three areas where the NSA collects data inside the United States, but under Executive Order 12333. I’ll also note two areas where the NSA collects data outside the United States, but under FISA.
In the debates surrounding intelligence reform, many observers have made a critical assumption. If Congress doesn’t act by mid-2015, the thinking goes, the NSA’s controversial phone metadata program will turn into a pumpkin. In this post, I’m going to sketch why that view is so common—and why, regrettably, the clock may not strike midnight.
Over the past couple of days, there’s been an outpouring of concern about Verizon’s advertising practices. Verizon Wireless is injecting a unique identifier into web requests, as data transits the network. On my phone, for example, here’s the extra HTTP header.
After poring over Verizon’s related patents and marketing materials, here’s my rough understanding of how the header works.
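Because the identifier arrives as an ordinary HTTP request header (reported in the wild as X-UIDH), any site operator can read it server-side with no special effort. A minimal sketch, as a WSGI handler; the identifier value and everything besides the header name are illustrative:

```python
# Minimal sketch: a WSGI app that reports whether a request carries
# Verizon's injected identifier. The header name, X-UIDH, is the one
# observed in the wild; the rest is illustrative.
def detect_uidh(environ):
    """Return the injected identifier, or None if absent.

    WSGI exposes the X-UIDH request header as HTTP_X_UIDH.
    """
    return environ.get("HTTP_X_UIDH")

def app(environ, start_response):
    uidh = detect_uidh(environ)
    body = (f"Tracked: {uidh}" if uidh else "No X-UIDH header").encode()
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [body]

# Simulated requests, since the header only appears on Verizon's network:
print(detect_uidh({"HTTP_X_UIDH": "example-identifier-value"}))  # value present
print(detect_uidh({}))                                           # header absent
```

Note that nothing about the client changes across requests: the same identifier reappears even after clearing cookies, which is what makes the header a supercookie.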
I’m excited to be teaching Stanford Law’s first Coursera offering this fall, on government surveillance. In preparation, I’ve been extensively poking around the platform; while I found some snazzy features, I also stumbled across a few security and privacy issues.
- Any teacher can dump the entire user database, including over nine million names and email addresses.
- If you are logged into your Coursera account, any website that you visit can list your course enrollments.
- Coursera’s privacy-protecting user IDs don’t do much privacy protecting.
The balance of this piece provides some detail on each of the vulnerabilities.
Update 9/4: Coursera has acknowledged the issues, and claims they are “fully addressed.” The second vulnerability, however, still exists.
Update 9/6: Coursera appears to have imposed rate limiting on the APIs associated with the second vulnerability, mitigating the risk to users. A malicious website can now iterate over about 10% of the course catalog before having to wait.
On Friday, President Obama signed a mobile phone unlocking bill into law. Some observers have taken to describing S. 517, the Unlocking Consumer Choice and Wireless Competition Act, as a permission slip for consumers. Here’s a sample:
The New York Times: “you will no longer be breaking the law if you unlock your cellphone”
The Los Angeles Times: “makes it legal once again for consumers to unlock their cellphones”
CNET: “makes unlocking a cell phone legal again”
Those explanations aren’t quite accurate. The new law (temporarily) shields consumers from the Digital Millennium Copyright Act. It is, by design, a narrow fix; it expressly leaves other sources of legal liability untouched. …