While traveling through Dulles Airport last week, I noticed an Internet oddity. The nearby AT&T hotspot was fairly fast—that was a pleasant surprise.
But the web had sprouted ads. Lots of them, in places they didn’t belong.
Last I checked, Stanford doesn’t hawk fashion accessories or telecom service.1 And it definitely doesn’t run obnoxious ads that compel you to wait.
Earlier today, the New York Times reported that the National Security Agency has secretly expanded its role in domestic cybersecurity. In short, the NSA believes it has authority to operate a warrantless, signature-based intrusion detection system—on the Internet backbone.1
Owing to the program’s technical and legal intricacies, the Times-ProPublica team asked me to help explain the underlying primary documents.2 I have high confidence in the report’s factual accuracy.3
Since this morning’s coverage is calibrated for a general audience, I’d like to provide some additional detail. I’d also like to explain why, in my view, the news is a game-changer for information sharing legislation.
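The core mechanism at issue is conceptually simple: scan traffic for byte patterns ("signatures") drawn from a watchlist, and flag whatever matches. A minimal sketch of signature-based detection, with invented placeholder signatures (not actual selectors):

```python
# Toy signature-based detection: flag any payload containing a watchlisted
# byte pattern. The signatures below are made-up placeholders.
SIGNATURES = [b"malware-beacon", b"evil.example.com"]

def match_signatures(payload: bytes) -> list:
    """Return every watchlist signature that appears in the payload."""
    return [sig for sig in SIGNATURES if sig in payload]
```

The legal questions arise from where this matching happens (the Internet backbone) and without what (a warrant), not from the matching itself.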
Over four years ago, Google launched a Chrome privacy extension. Keep My Opt-Outs arrived with a media splash, and it presently has over 400,000 users worldwide.1
It’s a top result on the Chrome Web Store,2 and it’s even endorsed by a faux celebrity.
Unfortunately, the Keep My Opt-Outs extension isn’t nearly as effective as Google claims. It hasn’t been updated in years, so it delivers only half of the promised coverage. Keep My Opt-Outs also doesn’t work in Chrome’s private browsing mode, even when the user has explicitly enabled it there.
If you’re currently running Keep My Opt-Outs, I’d encourage switching to Disconnect or Privacy Badger.3 Adblock, Adblock Plus, and Ghostery are also excellent privacy tools, when configured properly.
In this post, I’ll explain why Google emphasized the Keep My Opt-Outs extension, how the code works, and what went awry.
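For orientation, the general approach such extensions take is to carry a registry of known advertising opt-out cookies and restore any that go missing (for instance, after the user clears cookies). A minimal sketch of that pattern; the registry entry below is an invented example, not Google’s actual list:

```python
# Hypothetical registry of opt-out cookies, keyed by (domain, cookie name).
# A real extension ships a vetted list of ad-network opt-out cookies.
OPT_OUT_REGISTRY = {
    ("example-adnetwork.com", "opt_out"): "1",
}

def restore_opt_outs(cookie_jar):
    """Re-set any registry cookie missing from the jar.

    cookie_jar: dict mapping (domain, name) -> value.
    Returns the list of entries that had to be restored.
    """
    restored = []
    for key, value in OPT_OUT_REGISTRY.items():
        if key not in cookie_jar:
            cookie_jar[key] = value
            restored.append(key)
    return restored
```

The approach only works as well as the registry is current, which is precisely where Keep My Opt-Outs fell behind.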
Verizon Wireless injects a unique header into customer web traffic. When the practice came to light last year, it was widely panned. Numerous security researchers pointed out that this “supercookie” could trivially be used to track mobile subscribers, even if they had opted out, cleared their cookies, or entered private browsing mode.1 But Verizon persisted, emphasizing that its own business model did not use the header for tracking.
Out of curiosity, I went looking for a company that was taking advantage of the Verizon header to track consumers. I found one—Turn, a headline Verizon advertising partner. They’re “bringing sexy back to measurement.”
When the National Security Agency collects data inside the United States, it’s regulated by the Foreign Intelligence Surveillance Act. There’s a degree of court supervision and congressional oversight.
When the agency collects data outside the United States, it’s regulated by Executive Order 12333. That document embodies the President’s inherent Article II authority to conduct foreign intelligence. There’s no court involvement, and there’s scant legislative scrutiny.
So, that’s the conventional wisdom. American soil: FISA. Foreign soil: EO 12333. Unfortunately, the legal landscape is more complicated.
In this post, I’ll sketch three areas where the NSA collects data inside the United States, but under Executive Order 12333. I’ll also note two areas where the NSA collects data outside the United States, but under FISA.
In the debates surrounding intelligence reform, many observers have made a critical assumption. If Congress doesn’t act by mid-2015, it goes, the NSA’s controversial phone metadata program will turn into a pumpkin. In this post, I’m going to sketch why that view is so common—and why, regrettably, the clock may not strike midnight.
Over the past couple of days, there’s been an outpouring of concern about Verizon’s advertising practices. Verizon Wireless is injecting a unique identifier into web requests, as data transits the network. On my phone, for example, here’s the extra HTTP header.1
After poring over Verizon’s related patents and marketing materials, here’s my rough understanding of how the header works.
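For concreteness: the injected header has been publicly reported under the name X-UIDH. A server that wanted to use it for tracking would only need to read it back out of each request. A framework-agnostic sketch, with request headers modeled as a plain dict:

```python
def extract_verizon_uid(headers):
    """Return the Verizon-injected identifier, if present.

    HTTP header names are case-insensitive, so normalize before lookup.
    """
    normalized = {name.lower(): value for name, value in headers.items()}
    return normalized.get("x-uidh")
```

No cooperation from the browser is needed: the identifier rides along on every unencrypted request the carrier forwards.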
I’m excited to be teaching Stanford Law’s first Coursera offering this fall, on government surveillance. In preparation, I’ve been extensively poking around the platform; while I found some snazzy features, I also stumbled across a few security and privacy issues.
- Any teacher can dump the entire user database, including over nine million names and email addresses.
- If you are logged into your Coursera account, any website that you visit can list your course enrollments.
- Coursera’s privacy-protecting user IDs don’t do much privacy protecting.
The balance of this piece provides some detail on each of the vulnerabilities.
Update 9/4: Coursera has acknowledged the issues, and claims they are “fully addressed.” The second vulnerability, however, still exists.
Update 9/6: Coursera appears to have imposed rate limiting on the APIs associated with the second vulnerability, mitigating the risk to users. A malicious website can now iterate over about 10% of the course catalog before having to wait.
Retail analytics is a fraught field. The premise is straightforward: enable brick-and-mortar stores to track their customers. The technology is straightforward, too: monitor broadcasts from shoppers’ smartphones. Privacy concerns have, however, put a damper on the nascent industry. Regulators, legislators, and advocacy groups have questioned the legitimacy of surreptitiously monitoring shoppers’ gadgets.
Last fall, Senator Schumer announced a grand bargain with retail analytics firms. They will be bound by a “Mobile Location Analytics Code of Conduct,” a set of voluntary practices intended to assuage privacy fears. The document has already been widely panned, both as a product of backroom dealing, and for providing little substantive protection to consumers.
One particular point of contention is how the industry proposes to preserve privacy through cryptography. This post explains the Code of Conduct’s crypto, and demonstrates how it can trivially be undone.
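To preview the problem: "de-identifying" a shopper by hashing her MAC address offers little protection, because the MAC address space is tiny by cryptographic standards. With a known vendor prefix (OUI), only 2^24 device identifiers remain, which a laptop can enumerate in seconds. A sketch, assuming an unsalted SHA-1 hash of the colon-separated address (the hash function and addresses here are illustrative):

```python
import hashlib

def hash_mac(mac):
    """'De-identify' a MAC address by hashing it (assumed: unsalted SHA-1)."""
    return hashlib.sha1(mac.encode()).hexdigest()

def recover_mac(target_hash, oui):
    """Undo the hash by enumerating all 2**24 device IDs under a known OUI."""
    for n in range(2 ** 24):
        candidate = "%s:%02x:%02x:%02x" % (
            oui, (n >> 16) & 0xFF, (n >> 8) & 0xFF, n & 0xFF)
        if hash_mac(candidate) == target_hash:
            return candidate
    return None
```

Because the input space is enumerable, the hash functions as a consistent pseudonym rather than genuine de-identification: anyone holding the hashes can walk the space and link them back to hardware addresses.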
Co-authored by Patrick Mutchler.
Is telephone metadata sensitive? The debate has taken on new urgency since last summer’s NSA revelations; all three branches of the federal government are now considering curbs on access. Consumer privacy concerns are also salient, as the FCC assesses telecom data sharing practices.
President Obama has emphasized that the NSA is “not looking at content.” “[T]his is just metadata,” Senator Feinstein told reporters. In dismissing the ACLU’s legal challenge, Judge Pauley shrugged off possible sensitive inferences as a “parade of horribles.”
On the other side, a number of computer scientists have expressed concern over the privacy risks posed by metadata. Ed Felten gave a particularly detailed explanation in a declaration for the ACLU: “Telephony metadata can be extremely revealing,” he wrote, “both at the level of individual calls and, especially, in the aggregate.” Holding the NSA’s program likely unconstitutional, Judge Leon credited this view and noted that “metadata from each person’s phone ‘reflects a wealth of detail about her familial, political, professional, religious, and sexual associations.’”
This is, at base, a factual dispute. Is it easy to draw sensitive inferences from phone metadata? How often do people conduct sensitive matters by phone, in a manner reflected by metadata?