
Sony-style Attacks and eMail Encryption

Friday, December 19, 2014 

Some of the summaries of the Sony attacks are a little despairing of the viability of internet security, for example Schneier:

This could be any of us. We have no choice but to entrust companies with our intimate conversations: on email, on Facebook, by text and so on. We have no choice but to entrust the retailers that we use with our financial details. And we have little choice but to use butt services such as iButt and Google Docs.

I respectfully disagree with some of the nihilism here: you do not need to put your data in the butt. Butt services are “free,” but only because you’re the product.  If you think you have nothing to hide and privacy is dead and irrelevant, you are both failing to keep up with the news and extremely unimaginative. You think you have no enemies?  Nobody would do you wrong for the lulz?  Nobody who would exploit information leaks for social engineering to rip you off?

Use butt services only when the function the service provides is predicated on a network effect (like Facebook) or simply can't be replicated with individual-scale resources (Google Search).  Individuals can reduce the risk of becoming a collateral target by setting up their own services (an email server, web server, chat server, file server, drop-box-style server, etc.) on their own hardware with minimal expertise (the internet is actually full of really good and expert help if you make an honest attempt to try), or by using a local ISP instead of relying on a global giant that is also a global target.

Email Can be Both Secure AND Convenient:

But there’s something this Sony attack has made even plainer: email security is bad.  Not every company uses the most insecure email system possible and basically invites hackers to a data smorgasbord like Sony did by using Outlook (I mean seriously, they can’t afford an IT guy whose expertise extends beyond point-n-click?  Though frankly the most disappointing deployment of Outlook is by MIT’s IT staff.  WTF?).

As lame as that is, email systems in general suffer from an easily remediated flaw: email is stored on the server in plain text which means that as soon as someone gets access to the email server, which is by necessity of function always globally network accessible, all historical mail is there for the taking.

Companies institute deletion policies where exposed correspondence is minimized by auto-deleting mail after a relatively short period, typically about as short as possible while still, more or less, enabling people to do their jobs.  This forced amnesia is a somewhat pathetic and destructive solution to what is otherwise an excellent historical resource: it is as useful to the employees as to hackers to have access to historical records and forced deletion is no more than self-mutilation to become a less attractive target.

It is trivial to create a much more secure environment with no meaningful loss of utility with just a few simple steps.

Proposal to Encrypt eMail at Rest:

I wrote in detail about this recently.  I realize it is a TLDR article, but as everyone’s wound up about Sony, a summary might serve as a lead-in for the more actively procrastinating. With a few very simple fixes to email clients (which could be implemented with a plug-in) and to email servers (which can be implemented via mail scripting like procmail or amavis), email servers can be genuinely secure against data theft.  These fixes don’t exist yet, but the two critical but trivial changes are:

Step One: Server Fix

  • Your mail server will have your public key on it (which is not a security risk) and use it to encrypt every message before delivering it to your mailbox if it didn’t come in already encrypted.

This means all the mail on the server is encrypted as soon as it arrives, and if someone hacks in, the store of messages is unreadable.  Maybe a clever hacker can install a program to exfiltrate incoming messages before they get encrypted, but doing this without being detected is very difficult and time consuming.  Grabbing an .ost file off some lame Windows server is trivial. I don’t mean to engage in victim blaming, but seriously, if you don’t want to get hacked, don’t go out wearing Microsoft.
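The delivery-time logic is simple enough to sketch. This is a hypothetical illustration of the filter a procmail or amavis hook would invoke; `encrypt_with_public_key` is my stand-in name, and base64 here only stands in for real ciphertext (a real deployment would shell out to something like `gpg --encrypt -r <recipient>`):

```python
import base64

PGP_HEADER = "-----BEGIN PGP MESSAGE-----"

def encrypt_with_public_key(body: str) -> str:
    # Placeholder for a real call to gpg with the mailbox owner's public
    # key; base64 merely stands in for ciphertext in this sketch.
    payload = base64.b64encode(body.encode()).decode()
    return f"{PGP_HEADER}\n\n{payload}\n-----END PGP MESSAGE-----"

def deliver(message: str) -> str:
    """Encrypt a message at delivery time unless it arrived encrypted."""
    if PGP_HEADER in message:
        return message  # already end-to-end encrypted by the sender
    return encrypt_with_public_key(message)
```

The key property: only the public key lives on the server, so nothing stored there can be read even by the server itself.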

Encrypting all mail on arrival is great security, but it also means that your inbox is encrypted.  Current email clients decrypt your mail for viewing but then “forget” the decrypted contents, so encrypted messages are slower to view than unencrypted ones and, most crippling of all, you can’t search your encrypted mail.  This makes encrypted mail unusable, which is why nobody uses it after decades. This unusability is a tragic and pointless design flaw that originated to mitigate what was then, apparently, a sore spot for one of Phil’s friends, whose wife had read his correspondence with another woman and divorce ensued; protecting the contents of email from client-side snooping has ever since been perceived as critical.¹

It was a well-intentioned design constraint that has become core canon of the GPG community, but it is wrong-headed on multiple counts:

  1. An intimate partner is unlikely to need the contents of the messages to reach sufficient confidence in distrust: the presence of encrypted messages from a suspected paramour would be more than sufficient cause for a confrontation.
  2. It breaks far more frequent use cases, such as business correspondence, where operational efficiency is entirely predicated on content search, which doesn’t work when the contents are encrypted.
  3. Most email compromises happen at the server, not at the client.
  4. Everyone seems to trust butt companies to keep their affairs private, much to the never-ending lulz of such companies.
  5. Substantive classes of client compromises, particularly targeted ones, capture keystrokes from the client, meaning if the legitimate user has access to the content of the messages, so too does the hacker, so the inconvenience of locally encrypted mail stores gains almost nothing.
  6. Server attacks are invisible to most users and most users can’t do anything about them.  Users, like Sony’s employees, are passive victims of sysadmin failures. Client security failures are the user’s own damn fault and the user can do something about them like encrypting the local storage of their device which protects their email and all their other sensitive and critical selfies, sexts, purchase records, and business correspondence at the same time.
  7. If you’re personally targeted at the client side, that some of your messages are encrypted provides very little additional security: the attacker will merely force you to reveal the keys.

Step Two: Client Fix

  • Your mail clients will decrypt your mail automatically and create local stores of unencrypted messages on your local devices.

If you’ve used GPG, you probably can’t access any mail you received more than a few days ago; it is dead to you because it is encrypted.  I’ve said before that this makes it as useless as an ephemeral-key encrypted chat, but without the security of an ephemeral key in the event somebody is willing to force you to reveal your key and is interested enough to go through your encrypted data looking for something.  They’ll get it if they want it that badly, but you won’t be bothered.

But by storing mail decrypted locally and by decrypting mail as it is downloaded from the server, the user gets the benefit of “end-to-end encryption” without any of the hassles.
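The client-side step can be sketched the same way. This is a hypothetical illustration, not an existing plug-in: `decrypt_with_private_key` stands in for a real `gpg --decrypt`, and reverses the base64 stand-in used for ciphertext in this sketch. The point is that the local copy is plaintext, so search just works:

```python
import base64

PGP_HEADER = "-----BEGIN PGP MESSAGE-----"

def decrypt_with_private_key(armored: str) -> str:
    # Placeholder for a real call to gpg with the user's private key;
    # this sketch just reverses the base64 stand-in for ciphertext.
    payload = armored.split("\n\n", 1)[1].rsplit("\n", 1)[0]
    return base64.b64decode(payload).decode()

local_store = []  # plaintext store on the (disk-encrypted) device

def download(message: str) -> None:
    """Decrypt on download so the local copy is plain and searchable."""
    if message.startswith(PGP_HEADER):
        message = decrypt_with_private_key(message)
    local_store.append(message)

def search(term: str) -> list:
    # Full-text search works because local copies are plaintext.
    return [m for m in local_store if term in m]
```

Decrypt once on download, never again: that is the whole usability fix.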

GPG-encrypted mail would work a lot more like an OTR encrypted chat.  You don’t get a message from OTR that reads “This chat message is encrypted, do you want to decrypt it?  Enter your password” every time you get a new chat, nor does the thread get re-encrypted as soon as you type something, requiring you to reenter your key to review any previous chat message.  That’d be idiotic.  But that’s what email does now.

Adoption Matters

These two simple changes would mean that server-side mail stores are secure, but just as easy to use and as accessible to clients as they are now.  Your local device security, as it is now, would be up to you.  You should encrypt your hard disk and use strong passwords because sooner or later your personal device will be lost or stolen and you don’t want all that stuff published all over the internet, whether it comes from your mail folder or your DCIM folder.

It doesn’t solve a targeted attack against your local device, but you’ll always be vulnerable to that and pretending that storing your encrypted email on your encrypted device in an encrypted form adds security is false security that has the unfortunate side effect of reducing usability and thus retarding adoption of real security.

If we did this, all of our email would be encrypted, which means there would be no additional hassle in receiving mail that was encrypted with your GPG key by the sender (rather than on the server).  The way it works now, GPG is annoying enough to warrant asking people not to send encrypted mail unless they have to, which tags that mail as worth encrypting to anyone who cares.  By eliminating the disincentive, universally end-to-end encrypted email becomes possible.

A few other minor enhancements that would help to really make end-to-end, universally encrypted email the norm include:

  • Update mail clients to prompt for key generation along with any new account.  The only required option would be a password, which should be different from the server-login password: a hash of the server password has to be on the server, and cracking that hash would then permit decryption of the mail stored there, so UX programmers take note!
  • Update address books, vcard, and LDAP servers so they expect a public key for each correspondent and complain if one isn’t provided or can’t be found.  An email address without a corresponding key should be flagged as problematic.
  • Corporate and hierarchical organizations should use a certificate authority-based key certification system, everyone else should use web-of-trust/perspectives style key verification, which can be easily automated to significantly reduce the risk of MitM attacks.
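The address-book change above could be as simple as flagging any contact without a key. A toy sketch of that check, assuming vCard-style entries (real vCard parsing per RFC 6350 is more involved; this only looks at EMAIL and KEY lines, and the function name is mine):

```python
def missing_keys(vcards: list) -> list:
    """Return email addresses from vCards that lack a KEY property.

    A contact with an email address but no public key should be flagged
    as problematic, per the proposal above.
    """
    flagged = []
    for card in vcards:
        email, has_key = None, False
        for line in card.splitlines():
            if line.startswith("EMAIL"):
                email = line.split(":", 1)[1]
            elif line.startswith("KEY"):
                has_key = True
        if email and not has_key:
            flagged.append(email)
    return flagged
```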

This is easy. It should have been done a long time ago.

 

Footnotes

1. I remember this anecdote from an early 1990s version of PGP.  I may be mis-remembering it, as the closest reference I can find is this FAQ:
Posted at 16:21:29 UTC

Category: FreeBSD, Privacy, Security, technology

Xabber now uses Orbot: OTR+Tor

Sunday, November 3, 2013 

As of Sept 30 2013, Xabber added Orbot support. This is a huge win for chat security. (Gibberbot has done this for a long time, but it isn’t as user-friendly or pretty as Xabber and it is hard to convince people to use it).

The combination of Xabber and Orbot solves the three most critical problems in chat privacy: obscuring what you say via message encryption, obscuring who you’re talking to via transport encryption, and obscuring which servers could be subpoenaed for at least the latter information via onion routing. OTR solves the first and Tor fixes the last two (SSL also solves the middle one, though while Tor uses a fairly secure SSL ciphersuite, who knows what that random SSL-enabled chat server uses – “none”?)

There’s a fly in the ointment of all this crypto: we’ve recently learned a few entirely predictable (and predicted) things about how communications are monitored:

1) All communications are captured and stored indefinitely. Nothing is ephemeral; not a phone conversation, not an email, not the web sites you visit. It is all stored and indexed, and should somebody sometime in the future decide that your actions are immoral or illegal or insidious or insufficiently respectful, this record may be used to prove your guilt or otherwise tag you for punishment. Who knows what clever future algorithms will be used in concert with big data and cloud services to identify and segregate the optimal scapegoat population for whatever political crisis is thus most expediently deflected. Therefore, when you encrypt a conversation it has to be safe not just against current cryptanalytic attacks, but against those that might emerge before the sins of the present are sufficiently far in the past to exceed the limitations of whatever entity is enforcing whatever rules. A lifetime is probably a safe bet. YMMV.

2) Those who specialize in snooping at the national scale have tools that aren’t available to the academic community, and there are cryptanalytic attacks of unknown efficacy against some or all of the current cryptographic protocols. I heard someone who should know better pooh-pooh the idea that the NSA might have better cryptographers than the commercial world because the commercial world pays better, as if the obsessive brilliance that defines a world-class cryptographer is motivated by remuneration. Not.

But you can still do better than nothing while understanding that a vulnerability to the NSA isn’t likely to be an issue for many, though if PRISM access is already being disseminated downstream to the DEA, it is only a matter of time before politically affiliated hate groups are trolling emails looking for evidence of moral turpitude with which to tar the unfaithful. Any complacency that might be engendered by not being a terrorist may be short lived. Enjoy it while it lasts.

And thus (assuming you have an Android device) you can download Xabber and Orbot. Xabber supports real OTR, not the fake-we-stole-your-acronym-for-our-marketing-good-luck-suing-us “OTR” that Google hugger-muggers and caromshotts you into believing your chats are ephemeral with (of course they and all their intelligence and commercial data mining partners store your chats, they just make it harder for your SO to read your flirty transgressions). Real OTR is a fairly strong, cryptographically secured protocol that transparently and securely negotiates a cryptographic key to secure each chat, which you never know and which is lost forever when the chat is over. There’s no open community way to recover your chat (that is, the NSA might be able to but we can’t). Sure, your chat partner can screen shot or copy-pasta the chat, but if you trust the person you’re chatting with and you aren’t a target of the NSA or DEA, your chat is probably secure.
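The “negotiates a key you never know, lost forever when the chat is over” property comes from an ephemeral Diffie-Hellman exchange. A toy sketch of just that idea, using a small Mersenne prime purely for illustration; real OTR uses a 1536-bit DH group plus AES, SHA, and authentication/deniability machinery on top, none of which is shown here:

```python
import secrets

# Toy DH parameters: a 127-bit Mersenne prime and a small generator.
# Far too small for real use; only the *ephemeral* structure matters here.
P = 2**127 - 1
G = 3

def ephemeral_keypair():
    """Generate a throwaway keypair for a single chat session."""
    private = secrets.randbelow(P - 2) + 2  # fresh secret every chat
    public = pow(G, private, P)
    return private, public

# Each party generates a keypair that exists only for this chat...
a_priv, a_pub = ephemeral_keypair()
b_priv, b_pub = ephemeral_keypair()

# ...and each derives the same shared secret from the other's public
# value, without the secret ever crossing the wire.
shared_a = pow(b_pub, a_priv, P)
shared_b = pow(a_pub, b_priv, P)
# When the chat ends, the private values are discarded and the shared
# key is unrecoverable: there is nothing stored to subpoena.
```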

But there’s still a flaw. You’re probably using Google. So anyone can just go to Google and ask them who you were chatting with, for how long, and about how many words you exchanged. The content is lost, but there’s a lot of meta-data there to play with.

So don’t use gchat if you care about that. It isn’t that hard to set up a chat server.

But maybe you’re a little concerned that your ISP not know who you’re chatting with. Given that your ISP (at the local or national level) might have a Blue Coat device and could easily be man-in-the-middling every user on their network simultaneously, you might have reason to doubt Google’s SSL connection. While OTR still protects the content of your chat, an inexpensive Blue Coat device renders the meta information visible to whoever along your comms path has bought one. This is where Tor comes in. While Google will still know (you’re still using Google even after they lied to you about PRISM and said, in court, that nobody using Gmail has any reasonable expectation of privacy?), your ISP (commercial or national) is going to have a very hard time figuring out that you’re even talking to Google, let alone with whom. Even the fact that you’re using chat is obscured.

So give Xabber a try. Check out Orbot, the effortless way to run it over Tor. And look into alternatives to cloud providers for everything you do.

Posted at 08:50:47 UTC

Category: FreeBSD, self-publishing, technology

Ending Free Speech to Protect Obsolete Industries

Thursday, March 15, 2012 

In 1998 I gave a talk at DefCon 6 titled “Copyright vs. Free Speech,” the gist of which was that in order to protect the profits of the publishing industry in the face of technology which obsoleted centralized publishing, publishers had begun to buy from congress increasingly draconian legislation to protect their obsolete business model.

They have been patient and effective, implementing a long-term program of carefully designed disinformation to turn an effective method of promoting the progress of science and the useful arts into a weapon to aid extorting profit from the people’s commons.

Due to the success of their propaganda campaigns, it bears repeating that authors have no inherent right to profit from their works. They do not “own” ideas. Neither they nor their assignees have any right to control the reproduction, reuse, or dissemination of ideas and inventions once they’ve chosen to make them public. We the people have chosen to gift them a temporary monopoly on commercial exploitation of their inventions as a mechanism the founding fathers thought would serve to maximize the availability of freely usable ideas in the public domain; that is, to promote the progress of science and the useful arts. It is a premise they have utterly perverted.

Any law which expands copyright steals from the public domain and gives exclusive right of commercial exploitation to the beneficiary. The copyright industry has been incredibly successful in bribing politicians into allowing it to graze its cattle in our public parks without any additional compensation to the public. Indeed, new copyright laws don’t even pay lip service to the public good and focus entirely on maximizing profits at the expense of the public domain. These legislative disasters are prima facie unconstitutional.

A new agreement, endorsed by the White House, effectively implements the dystopian warnings I gave in my 1998 talk: ISPs will begin to directly enforce copyright, acting as the muscle for the new business model the copyright industry has turned to now that publishing is obsolete: direct shakedowns.

It was clear in 1998 that if our economy was turning to the then touted model of an “information economy” (and not making things) it would become necessary to police the flow of information to block the unauthorized exchanges of ideas lest someone freely share an idea for which a private entity has been granted a monopoly and undermine the profitability of the imaginary economy.

Aside from the loss of privacy and the retarding of progress in science and the useful arts, a problematic consequence of the monitoring necessary to ensure the exchange of ideas is taxed is that citizens must always be monitored, now directly and intrusively by their ISPs. Monitoring has a chilling effect on free speech: people are naturally disinclined to dissent openly, and will speak only privately of ideas that challenge entrenched interests. Intrusive monitoring is an effective tool of totalitarianism because it destroys the privacy in which informed dissent grows strong enough to overcome the entrenched.

There is no practical way to implement an effective monopoly enforcement scheme at the ISP level without active monitoring of every digital interaction, from every website visited to every message exchanged, lest one hide a privatized bit. ISP monitoring undermines the foundations of democracy, at least the significant portion which has migrated to digital forums. This is a massive implementation of the same monitoring technology and concepts used in Syria and China to control dissidents, applied here merely to further enrich a few petty plutocrats.

Posted at 19:10:48 UTC

Category: politics, technology

Opting Out for Privacy

Friday, December 3, 2010 

There’s a great story at the Wall Street Journal describing some of the techniques being used to track people online that I found informative (as are the other articles listed in the series in the box below).  The EFF is doing some good work on this; your browser configuration probably uniquely identifies you, and thus every site you’ve ever visited (via data exchanges).  Unique information about you is worth about $0.001.  Collecting a few hundred million tenths of a cent starts to add up, and may end up raising your insurance premiums.
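The “your configuration uniquely identifies you” point is easy to demonstrate: hash the attributes any site can observe (user agent, fonts, timezone, plugins) and the combination is usually unique even when no single attribute is. A minimal sketch of the idea behind EFF’s Panopticlick; the attribute names here are illustrative, not a real tracker’s schema:

```python
import hashlib

def browser_fingerprint(attrs: dict) -> str:
    """Hash a browser's observable configuration into a stable identifier.

    Keys are sorted so the same configuration always yields the same
    fingerprint, regardless of the order attributes were collected in.
    """
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]
```

Change even one installed font and the identifier changes, which is exactly why these fingerprints survive cookie deletion.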

One of the more entertaining/disturbing tricks is to use “click jacking” to remotely enable a person’s webcam or microphone.  Is your computer or network running slowly? Maybe it is the video you’re inadvertently streaming back (and maybe you just have way too many tabs open…)

A few things you can do to improve your privacy include:

  • Opt out of Rapleaf. Rapleaf collects user information about you and ties it to your email address.  You have to opt out with each email address individually, which almost certainly confirms to them that all your email addresses belong to the same person.  You might want to use unique Tor sessions for each opt out if you don’t want them to get more information than they already have via the process.
  • Opt out at NAI. This is a one stop shop for the basic cookie tracking companies that are attempting to be semi-compliant with privacy requests.  If you enable javascript for the site (which would be disabled by default if you’re using scriptblocker) then you can opt out of all of them at once.  Presumably you have to return and opt out again every time a new company comes along.
  • Use Tor for anything sensitive.  If you care about privacy, learn about Tor.  It does slow browsing so you have to be very committed to use it for everything.  But the browser plug in makes it pretty easy to turn it on for easy browsing.
  • Don’t use IE for anything personal or important.
  • Run Spybot Search & Destroy regularly.  Spybot helps block BHOs and toolbars that seem to proliferate automagically and helps remove tracking cookies.  You’ll be amazed at how many are installed on your system.  I’ve gone back and forth on TeaTimer; I’m less excited about having a lot of background tools, even helpful ones, than I used to be.  Spybot currently looks for 1,359,854 different known spyware signatures.  Yikes.
  • Check what people know about you:  Google will tell you, so will Yahoo.  Spooky.
  • Use Firefox, if for no other reason than the following plugins.  (Personally, it is my favorite, but I know people who favor Chrome or even RockMelt, though talk about tracking!)  Just don’t use IE.
  • Use the private browsing mode in your browser (CTRL-SHIFT-P in Firefox).  It’d be nice if you could enable non-private browsing on a whitelist basis for sites you either trust or have to trust.  We’ll get there eventually…
  • TACO should help block flash cookies.
  • Install noscript to block scripts by default.  You can add all your favorite sites as you go so things work.  It is a pain in the ass for a while, but security requires vigilance.
  • Install adblock plus.  It helps keep the cookies away.    It also reduces ad annoyance.  You can enable ads for your favorite sites so they can pay their colo fees.
  • Add HTTPS Everywhere from EFF. The more your connections to sites are encrypted, the less your ISP (and others) can see about what you’re doing while you’re there.  Your ISP still knows every site you visit, and probably sells that information, but if your sessions are encrypted they don’t see the actual text you type.  It also makes it harder for script kiddies to grab your passwords at the cafe.
Posted at 02:44:43 UTC

Category: politics, technology