
In 2008, three MIT students became infamous for hacking into the Boston transit system. Using software they wrote, they discovered a method to get free rides by adding credit to subway passes. They had planned to announce their findings at DEF CON, an annual hacking conference held in Las Vegas, but a federal judge, at the request of the Massachusetts Bay Transportation Authority, barred them from giving the talk. Had they presented at the conference, the transit system would have incurred “significant damage,” the MBTA alleged. On the flip side, the injunction was unconstitutional, the students’ lawyer argued, a violation of their First Amendment rights.

In cases like this, it is unclear which party is correct. On one hand, the transit system could have lost revenue from riders taking advantage of free passes. On the other, scholars should be able to discuss their research without the threat of litigation. The dispute illustrates how difficult it is for vendors and researchers to agree on a way to responsibly disclose security vulnerabilities.

When someone discovers a flaw in software, he or she can choose one of several paths. The simplest is to do nothing, a less than ideal option, as it leaves the software vulnerable, waiting to be found and exploited by the next person. Another path is to notify the vendor, which is a good way to get the flaw fixed but can also unexpectedly lead to accusations of intrusion, and to lawsuits. Perhaps the riskiest path is to publicly divulge the findings through the media, a personal blog, or a talk at DEF CON, like the MIT trio. Because every instance is unique, the right thing to do is sometimes debatable, even when the intentions are good.

The CERT Coordination Center at Carnegie Mellon University recommends a delayed approach, in which security researchers wait 45 days before disclosing their findings. But this deadline can be challenging to meet, even for large organizations. Imagine a company like Microsoft, with its tens of millions of customers using different versions of Windows. Any time Microsoft issues a security update, it has to verify that the update works across countless machine configurations and settings. Nevertheless, a delayed approach is perhaps the best compromise, as Microsoft’s own Coordinated Vulnerability Disclosure policy states: “This serves everyone's best interests by ensuring that customers receive comprehensive, high-quality updates for security vulnerabilities, but are not exposed to malicious users while the update is being developed.”

Last October, The New Yorker, an 86-year-old publication, had to make two security updates to its website. The first was a change in its password policy. When users signed up for an account, the website set their passwords to the same value as their email addresses, making it simple to sign in to someone else’s account. The fix required the magazine to reset many of its subscribers’ passwords and email out new ones. Not surprisingly, confusion ensued among those who tried logging in, and some even questioned the authenticity of the email, prompting the magazine to send a follow-up.

“Please be assured that it is in fact from The New Yorker,” they wrote, “and that we are taking steps to strengthen the security of the digital edition.”

A week later came another security blunder: a method was discovered that allowed users to read articles on the magazine’s website without paying. Many newspapers and magazines put up a paywall, essentially a login page, to generate revenue from online subscriptions. The sole purpose of a paywall is to let certain users in and keep others out, like a bouncer at a club. Unfortunately, The New Yorker’s did not work as expected.

Both security flaws were discovered not by a malicious hacker or a security researcher, but by me, a reader of The New Yorker. At the time, I was a student, writing for an online magazine that I had created with a few of my journalism cronies. We were eager to publish the story in our magazine but knew we should notify Condé Nast, The New Yorker’s parent company, before we did anything. After several days without a response, we decided to forge ahead by writing a draft and showing it to our professor, the magazine’s advisor. The article, he said, should not be a brick that readers could throw through a storefront window; it should be truthful without reading like an instruction manual.

He recommended that we remove certain details and had the school’s lawyers vet the draft. After several rounds of edits and revisions, we published the story on the front page of our tiny magazine. In the first 12 hours, our website attracted some 3,900 visitors, more than all our previous traffic combined. We caught the attention of larger publications like the Wall Street Journal’s technology blog and the New York Observer. Advertising Age described our story as the “best piece of media writing no one at traditional publishing companies will read.” That proved to be false after we received a phone call. Apparently, someone at The New Yorker had read our story.

To characterize The New Yorker’s reaction as annoyed would be an understatement. We had poked holes in their system and demonstrated that no one on the Internet is infallible, not even the darling of the magazine world. The part they found most irritating, however, was readers’ inference that they had ignored our warnings, warnings they denied ever receiving. Then again, when you’re a magazine with a million subscribers, it’s not unusual to lose track of a few emails and calls every now and then.

We believed we had made a good-faith effort to follow the practice of responsible disclosure. We gave them advance warning. We excluded details that would allow others to exploit the vulnerability. We wanted to highlight the glaring problem of security. Given that the entire print industry seems to be migrating toward a paywall model, shouldn’t someone make sure the technology actually works? David Remnick, The New Yorker’s editor, even said in an interview, “I was going to be damned if I was going to train 18-year-olds, 20-year-olds, 25-year-olds, that this is like water that comes out of the sink.”

Fortunately for us, The New Yorker did not unleash its lawyers. Instead, its digital team opted to exchange emails and phone numbers with us, and after a few days, it plugged the paywall hole, making its content more secure.

As for the Boston transit authority, it eventually dropped its lawsuit. “This is a great opportunity for both the MBTA and the MIT students,” the agency said in a prepared statement. “As we continue to research ways to improve the fare system for our customers, we appreciate the cooperative spirit demonstrated by the MIT students.”

 

Jesse Young is a recent graduate of Northwestern’s Medill School of Journalism. His 9 to 5 is software engineering, and his 5 to 9 is journalism.
