Security

On security matters, mostly digital.

A utility for disabling Windows 10

Sunday, June 19, 2016 

While there may be people who actually like Windows 10, there are also many people who aren’t interested in fully exposing every part of their digital life to for-profit mining as a means of offsetting Microsoft’s declining profits in the desktop OS business. If you’re one of them, fighting Microsoft’s truly viral (and malware-like) marketing techniques is quite a hassle.  It appears there may be an easier way.

Microsoft has finally provided an “easy” way to stop the Windows OS upgrade (e.g. from 7.x or 8.x to 10) from happening automatically and without user intervention (and frequently in outright defiance of clear user intent, because profits first!).

The short form for people who are comfortable with some of the internal workings of Windows is:

Search for "edit group policy", open the editor, then follow the selection cascade:

Local Computer Policy -> Computer Configuration -> Administrative Templates 
-> Windows Components -> Windows Update 
-> Turn off the upgrade to the latest version... -> [x] Enabled

The longer instructions are at this link: https://support.microsoft.com/en-us/kb/3080351

Easy, no?
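For the script-inclined, the Group Policy setting corresponds (if I recall the KB correctly) to a single registry value. Here is a minimal Python sketch of setting it; the DisableOSUpgrade name is taken from the KB article, so verify it there for your Windows version, and run this elevated:

# Sketch: set the registry policy behind the Group Policy setting (per KB3080351).
# Assumes the documented DisableOSUpgrade DWORD; run as Administrator.
import winreg

POLICY_KEY = r"SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate"

with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, POLICY_KEY, 0,
                        winreg.KEY_SET_VALUE) as key:
    # 1 = do not offer the upgrade to the latest version of Windows
    winreg.SetValueEx(key, "DisableOSUpgrade", 0, winreg.REG_DWORD, 1)

print("DisableOSUpgrade set; the upgrade offer should stop appearing.")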

I suggest doing this and then downloading and installing the following program.  It will pretty much do the same thing but it also checks to see if Microsoft has already kindly filled your hard disk with malware without your permission and offers to delete it:

https://www.grc.com/never10.htm

Note that my previous posts about removing specific “updates” are still relevant.  The above should prevent Windows 10 from auto-installing, but Microsoft has been pushing “telemetry” updates to Win 7 and Win 8 which are also spyware: they track you and report your usage patterns back to Microsoft without telling you or asking you.
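As a sketch of what removing one of those “updates” looks like from a script: the KB numbers below are commonly cited examples of the GWX/telemetry patches, not a vetted list, so check them against the earlier posts before removing anything.  wusa.exe is the standard Windows update uninstaller; run this from an elevated prompt.

# Sketch: uninstall specific Windows updates with wusa.exe.
# KB numbers are illustrative examples only; verify your own list first.
import subprocess

SUSPECT_KBS = ["3035583", "3068708", "3080149"]  # example KB numbers

for kb in SUSPECT_KBS:
    subprocess.run(["wusa.exe", "/uninstall", f"/kb:{kb}", "/quiet", "/norestart"])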

Welcome to the new economy: you’re the product.

Posted at 13:56:56 GMT-0700

Category: HowTo, Privacy, Security, Technology

Turn off windows update now!

Monday, March 14, 2016 

If you haven’t already, turn off Windows Update now.  Microsoft has recently started installing Windows 10 spyware without consent.  A good friend of mine, who runs IT at his company, had a bunch of systems there hacked by Microsoft over the weekend; the hack broke the certificate store for WPA2 and thus their wifi connections.

To be clear, Windows 10 is spyware.  Microsoft has changed their business model from selling a product to selling data – your data – to whoever they want.  Windows 10 comes with a EULA that gives them the right to steal everything on your computer – your email, your private pictures, your home movies, your love letters, your medical records, your financial records – anything they want, without telling you.  “If you’re not paying for the product, you are the product.”

If this happens to you,  I suggest contacting your state attorney general and filing a complaint against Microsoft.  Hopefully a crushing class action suit or perhaps jail time for the executives that dreamed up this massive heist will help deter future corporate data thieves, though that’s certainly irrational optimism.

I wish I could recommend switching to Linux for everyone, but there’s a lot of software that still depends on Windows and a lot of users who will have a hard time migrating (developers: please stop developing for Microsoft).  Apple seems unequivocally better in refusing to act as a key player in bringing about Total Information Awareness.  I’m not a huge fan of their walled garden and computers-as-overpriced-fashion-accessories approach, but it is far better than outright theft.  For those who are slightly computer savvy, there’s Linux Mint, which is quite usable and genuinely free.

These instructions might help prevent that disaster of an update being visited upon you (and possibly the law enforcement visits to come after Microsoft starts sifting through all your datas and forwarding on whatever they find).  The latest reports suggest they aren’t enough, but they’re the best I have found short of isolating your Windows box from the internet completely.
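If you’d rather script the “turn off automatic downloads” part than click through the control panel, here is a minimal sketch, assuming the standard Windows Update policy values (AUOptions = 2 means “notify before download”); run it elevated and verify the values for your Windows version:

# Sketch: stop silent downloads by setting the Windows Update policy to
# "notify for download and notify for install" (AUOptions = 2).
import winreg

AU_KEY = r"SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU"

with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, AU_KEY, 0,
                        winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "AUOptions", 0, winreg.REG_DWORD, 2)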

Posted at 14:27:03 GMT-0700

Category: Negative, Privacy, Security, Technology

PGP Usability Regression thanks to Enigmail

Thursday, February 25, 2016 

Enigmail, the essential Thunderbird plugin for encrypted mail, is a fairly dynamic project that occasionally makes UI and usability decisions not everyone agrees with, and the latest auto-update is one of them.

The latest is a problem for me.  I use K9 for mobile mail and K9 doesn’t support PGP/MIME, but the update just switched the default encoding to PGP/MIME anyway:

[Screenshot: enigmail-bad-mime]

Why?  OK – PGP/MIME leaks less metainformation than inline PGP, but at the expense of compatibility.  K9 should support PGP/MIME, but it doesn’t.  Enigmail should have synchronized with K9 and made PGP/MIME the default only once mobile users could actually use it.

But encryption people often insist that the only use case that matters is some edge case they think is critical.  They like to say that nobody should read encrypted mail on a mobile device because the baseband of the device is intrinsically insecure (all cell phones are intrinsically insecure – phones should treat the data radio as a serial modem and the OS and the data modem should interact only over a very simple command set – indeed, the radio should be a replaceable module, but that gets beyond this particular issue).

For now, make sure your default encoding is Inline-PGP or you’ll break encryption.   Encryption only works if it is easy to use and universally available. When people can’t read their messages, they just stop using it.  This isn’t security, this is a mistake.

Posted at 01:52:42 GMT-0700

Category: Cell phones, Privacy, Security, Technology

Signal Desktop: Probably a good thing

Tuesday, December 8, 2015 

Signal is an easy-to-use chat tool that competes (effectively) with WhatsApp or Viber. They’ve just released a desktop version, currently in a buzz-generating “preview release.”  It is developed by a guy with some cred in the open source and crypto movement, Moxie Marlinspike.  I use it, but do not entirely trust it.

I’m not completely on board with Signal.  It is open source, and so in theory we can verify the code.  But there’s some history I find disquieting.  So while I recommend it as the best, easiest to use, (probably) most secure messaging tool available, I do so with some reservations.

  • It originally handled encrypted SMS messages.  There is a long argument on the mailing lists about why they broke SMS support.  I find all of the arguments Whisper Systems made specious and unconvincing, and cannot ignore the fact that the SMS tool sent messages through the local carrier (Asiacell, Korek, or Zain here).  Breaking that meant secure messages only go through Whisper Systems’ Google-managed servers, where all metadata is captured and accessible to the USG. Since it was open source, that version has been forked and is still developed; I use the SMSSecure fork myself.
  • Signal has captured all the USG funding for messaging systems.  Alternatives are not getting funds.  This may make sense from a purely managerial point of view, but it also creates a single point of infiltration.  It is far easier to compromise a single project if there aren’t competing projects.   Part of the strength of open source is only achieved when competing development teams are trying to one-up each other and expose each other’s flaws (FreeBSD and OpenBSD, for example).  In a monoculture, the checks and balances are weaker.
  • Signal has grown more intimate with Google over time.  The desktop version sign-up uses your Google ID to get you in the queue.  Google is the largest commercial spy agency in the world, collecting more data on more people than any other organization except probably the NSA.  They’re currently an advertising company and make their money selling your data to advertisers, something they’re quite disingenuous about, but the data trove they’ve built is regularly mined by organizations with more nefarious aims than merely fleecing you.

What to do?  Well, I use Signal.  I’m pretty confident the encryption is good, or at least as good as anything else available.  I know my metadata is being collected and shared, and until Jake convinces Moxie to use anonymous identifiers for accounts and to message through Tor hidden nodes, you have to be very tech savvy to get around that.  There are also no civil society grants going to any other messaging service using, for example, an open standard like a Jabber server on a hidden node with OTR.

For now, take a half step up the security ladder and stop using commercial faux security (or unverifiable security, which is the same thing) and give Signal a try.

Maybe at some later date I’ll write up an easy-to-follow guide on setting up your own Jabber server as a Tor hidden service and federating it so you can message securely, anonymously, and keep your data (meta and otherwise) on your own hardware in your own house, where it still has at least a little legal protection.
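Until then, the hidden-service half of that guide is only a few lines of Tor configuration.  A sketch, with an assumed directory path and the standard XMPP ports, pointing at whatever Jabber server (Prosody, ejabberd, etc.) is listening locally:

# torrc sketch: expose a locally running Jabber/XMPP server as a hidden service
# (the directory path is an example; 5222/5269 are the standard c2s/s2s ports)
HiddenServiceDir /var/db/tor/xmpp/
HiddenServicePort 5222 127.0.0.1:5222
HiddenServicePort 5269 127.0.0.1:5269

The federation part (server-to-server over the onion address) takes more work on the XMPP server side, which is the bit that deserves the longer write-up.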

 

Posted at 10:21:22 GMT-0700

Category: Positive, Privacy, Reviews, Security, Technology

10 Gbyte Win10 Spyware “upgrade” now forced on users

Sunday, September 27, 2015 

Microsoft has, historically, done some amazingly boneheaded things like Clippy, Vista, Win 8, and Win 10.  They have one really good product, Excel; otherwise everything they’ve done has succeeded only through illegal exploitation of an aggressively defended monopoly. OK, maybe the Xbox is competitive, but I’m not much of a gamer.

Sadly for the world, the model of selling users for profit to advertisers and spies has gained so much ground that Microsoft was starting to look like the least evil major entity in closed-source computing.  Poor Microsoft.  Losing the evil crown must be at least as humiliating as their waning revenue and abject failures in the mobile space (so strange… try to enter a space where they don’t have a monopoly to force users to accept their mediocre crap and they fail, who’da thunk it?).

“There is a difference between policy and practice. We don’t read customers’ mail. We don’t read customer documents. We don’t triangulate YouTube views and searches. We don’t use the content of your Hotmail to target ads in Bing,”

Frank Shaw, Corporate Vice President of Corporate Communications for Microsoft

Well, never fear: Windows 10 is here, and they’re radically one-upping the data-theft economy.  They’re p0wning not just the data you idiotically entrust to someone else’s server for free (without ever considering why they’re giving you that useful service for “free,” or what they, or whoever buys their ultimately failed business, might do with it), but also the data you consider too sensitive for the Google or the Apple.  Windows 10 exfiltrates all your data to Microsoft for their use and profit without your knowledge.  Don’t believe it? Read their Privacy Statement.

Finally, we will access, disclose and preserve personal data, including your content (such as the content of your emails, other private communications or files in private folders), when we have a good faith belief that doing so is necessary.

And it is free (as in beer but not as in speech).  What could possiblay go wrong?

Well, people weren’t updating fast enough so Microsoft is now pushing that update on you involuntarily.  Do you have a data cap that a 10G download might break and cost you money?  So what!  Your loss!  Don’t have enough space on your drive for a 10G hidden folder of crapware foisted off on you without your permission?  Tough crap, Microsoft don’t care.

To be clear, Windows 10 is spyware.  If this was coming from a teenage hacker somewhere, they’d be facing jail time.  It is absolutely, unequivocally malware that will create a liability for you if you use it.  If you have any confidentiality requirement, you must not install windows 10.  Ever. Not even on your home machine.  Just don’t.

The only way to prevent this is really annoying and a little risky: disable automatic downloads.  One of the problems with Microsoft’s operating systems is the unbelievably crappy spaghetti code that results in a constant flow of cracks; a week’s worth are patched every Tuesday, about one serious vulnerability every fortnight these days (note this is about the same as Ubuntu and about a quarter the rate of OSX or iOS; why people think Apple products are “secure” is beyond me – live in that fantasy walled garden!  But nice logo you paid a 50% premium for on your shiny device). Not patching increases the risk that some hacker somewhere will steal your datas, but patching guarantees that Microsoft will steal your datas.  Keep your anti-virus up to date and live a little dangerously by keeping Microsoft out.

Here’s an interesting article: how-to-clean-the-windows-10-crapware-off-your-windows-7-or-81-pc

And a tool referenced in that article: GWX control panel (that can help remove the windows 10 infection if you got it).

And a list of patches I found that are related to Win10 malware, which you can remove if you haven’t installed it yet.  (Windows 10 eliminates the ability to choose or selectively remove patches; once you’re in for the ride, you’re chained in: all or nothing.)

Basic advice:

  • Disable automatic updates and automatic downloads of updates.
  • Review each update Microsoft offers.  This is tedious (my Win 7 install reports 384 updates, 5-10 a week), but other than security patches you probably don’t really need them.  Only install a patch if there’s a reason.  Sorry, that sucks, but there’s always Linux Mint: free like beer AND free like speech.
  • If you’re still on Win 7/8, uninstall the spyware Microsoft has probably already installed (see the sketch after this list).  If you’re on Windows 8, you probably want to upgrade to Windows 7 if at all possible.
  • If you succumbed to the pressure and became a Microsoft Product by installing Windows 10, uninstall it.
  • If uninstall doesn’t work, switch to Mint or reinstall 7.
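Here’s the sketch referenced above: check which of the commonly cited GWX/telemetry patches are already installed so you know what to review.  The KB numbers are illustrative examples, not a vetted list; wmic and wusa are standard Windows tools, and this should be run from an elevated prompt.

# Sketch: list installed hotfixes and flag commonly cited GWX/telemetry KBs.
# KB numbers are examples only; verify each one before uninstalling anything.
import subprocess

SUSPECT_KBS = {"KB3035583", "KB3068708", "KB3075249", "KB3080149"}

out = subprocess.run(["wmic", "qfe", "get", "HotFixID"],
                     capture_output=True, text=True).stdout
installed = {line.strip() for line in out.splitlines() if line.strip().startswith("KB")}

for kb in sorted(SUSPECT_KBS & installed):
    print(f"Installed and worth reviewing: {kb} (wusa /uninstall /kb:{kb[2:]})")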

Most importantly, if you develop software for servers or for end users, stop developing for Microsoft (and Apple too).  Respect the privacy of your customers by not exposing them to exploitation by desperate operating system vendors.  In many classes of applications, your customers buy their computers to run your software: they don’t care what operating system it requires – that should be transparent and painless.  Microsoft is no longer an even remotely acceptable choice.  Server applications should run under FreeBSD or OpenBSD and desktop applications should run under Linux.  You can charge more and generate more profit because the total net cost for your customers will be lower.  Split the difference and give them a more reliable, more secure, and lower cost environment and make more money doing so.

Posted at 08:07:54 GMT-0700

Category: FreeBSD, HowTo, Linux, Security, Technology

The CA System is Intractably Broken

Tuesday, July 21, 2015 

I’ve been dealing with the hassle of setting up certs for a new site over the last few days. It means using StartCom’s certs because they’re pretty good (only one security breach) and they have a decently low-hassle free certificate that won’t trigger BS warnings in browsers marketing fake cert mafia placebo security products to unwitting users. (And the CTO answers email within minutes, well past midnight.)

And in the middle of this, news of another breach of the CA system was announced on the heels of Lenovo’s Superfish SSL crack, this time a class break in which a Chinese company was able to generate the equivalent of a lawful intercept cert and provided it to a private company. Official lawful intercept certificates are a globally used tool to silently crack SSL so governments can monitor SSL-encrypted traffic in compliance with national laws like the US’s CALEA.

(aww, someone liked this: https://news.ycombinator.com/item?id=5858538)

But this time it went to a private company, which was using it to intercept and crack Google traffic, and Google found out. The absurdity is to presume that this is an infrequent event. Such breaches happen regularly (and a “breach” here isn’t the lawful intercept tool itself, which is in constant and widespread use globally, but such a tool in the “wrong” hands). There’s no data on the ratio of discovered breaches to undiscovered breaches, of course. While it is possible that they are always found, these seemingly accidental discoveries suggest far wider misuse than generally acknowledged.

The cert mafia should be abolished. Certificate authorities work for authoritarian environments in which a single entity is trusted by fiat, as in a dictatorship or a company. The public should trust public opinion, and a tool like Perspectives would end these problems as well as significantly lower the barrier to a fully encrypted web, since those of us trying to protect our traffic wouldn’t need to choose between forking over cash to the cert mafia for fake security or making our users jump through scary security messages and complex work-arounds.

Posted at 00:53:59 GMT-0700

Category: FreeBSD, Privacy, Security, Technology

A sad loss for security

Monday, July 20, 2015 

Whisper Systems wrote the very useful TextSecure app for Android. It had a great feature of encrypting text messages, a standard communication modality in much of the world and one I rely on often. I have previously suggested it is a good tool.

The last “update” removed the ability to establish new encrypted chats over SMS and, it appears, the next will remove the function entirely. For me, this change substantially reduces the utility of the app.

Reading their arguments for doing so, I find myself disagreeing with their justifications. I understand there was some complexity in establishing encrypted SMS, but frankly initiating a one-time key exchange was about as easy as encrypted communication gets. That iOS users can’t play along is pretty irrelevant: iOS isn’t exactly the platform for secure communications anyway; you carry iOS devices when you want to impress people with your brand awareness, not get things done. That people occasionally end up with a conversation that is half-encrypted seems annoying but hardly all that problematic. The person who uninstalled the app will try to send messages in the clear, not the person who is still running it with a partial session. I can see the annoyance, but not any security leak.

I think the final result is somewhat dangerous. The first incarnation used SMS as the starting point, and once a secure session was established, communications moved transparently to the data channel if one was available. If not, they stayed with SMS. As I work in a place where data service is frequently disabled, this was the most reliable non-voice communication protocol.

Now SMS is unencrypted and data-mode communication is encrypted. You have to remember which is which and that is dangerous.

If they don’t restore encrypted SMS functionality, I will switch back to the standard SMS app, which is insecure SMS only and so not confusing, and use ChatSecure or Xabber for encrypted data communications so the difference is clear. You’re probably going to run a Jabber-based chat tool anyway; ChatSecure’s Tor integration makes it a better choice for data-mode chat, while TextSecure no longer does anything particularly useful over the default app for SMS mode nor anything particularly unique for data mode.

Posted at 00:53:41 GMT-0700

Category: Cell phones, Security

Making Chrome Less Horrible

Saturday, June 13, 2015 

Google’s Chrome is  a useful tool to have around, but the security features have gotten out of hand and make it increasingly useless for real work without actually improving security.

After a brief rant about SSL, there’s a quick solution at the bottom of this post.


 

Chrome’s Idiotic SSL Handling Model

I don’t like Chrome nearly as much as Firefox, but it does do some things better (I have a persistent annoyance with pfSense certificates that cause slow loading of the pfSense management page in FF, for example). Lately I’ve found that the Google+ script seems to kill Firefox, so I use Chrome for logged-in Google activities.

But Chrome’s handling of certificates is abhorrent.  I’ve never seen anything so resolutely destructive to security and utility.  It is the most ill-considered, poorly implemented, counter-productive failure in UI design and security policy I’ve ever encountered.  It is hateful and obscene.  A disaster.  An abomination. The ill-conceived excrement of ignorant twits.  I’d be happy to share my unrestrained feelings privately.

It is a private network, you idiots

I’ve discussed the problem before, but the basic issues are that:

  • The certificate authority is NOT INVALID, Chrome just doesn’t recognize it because it is self-signed.  There is a difference, dimwits.
  • This is a private network (10.x.x.x or 192.168.x.x) and if you pulled your head out for a second and thought about it, white-listing private networks is obvious.  Why on earth would anyone pay the cert mafia for a private cert?  Every web-interfaced appliance in existence automatically generates a self-signed cert, and Chrome flags every one of them as a security risk INCORRECTLY.
  • A “valid” certificate merely means that one of the zillions of cert mafia organizations ripping people off by pretending to offer security has “verified” the “ownership” of a site before taking their money and issuing a certificate that placates browsers.
  • Or a compromised certificate is being used.
  • Or a law enforcement certificate is being used.
  • Or the site has been hacked by criminals or some country’s law enforcement.
  • etc.

A “valid” certificate doesn’t mean nothing at all, but close to it.

So one might think it is harmless security theater, like a TSA checkpoint: it does no real harm and may have some deterrent value.  It is a necessary fiction to ensure people feel safe doing commerce on the internet.  If a few percent of people are reassured by firm warnings and are thus seduced into consummating their shopping carts, improving ad traffic quality and thus ensuring Google’s ad revenue continues to flow, ensuring their servers continue sucking up our data, what’s the harm?

The harm is that it makes it hard to secure a website.  SSL does two things: it pretends to verify that the website you connect to is the one you intended to connect to (but it does not do this) and it does actually serve to encrypt data between the browser and the server, making eavesdropping very difficult.  The latter useful function does not require verifying who owns the server, which can only be done with a web of trust model like perspectives or with centralized, authoritarian certificate management.

How to fix Chrome:

The damage is done. Millions of websites that could be encrypted are not because idiots writing browsers have made it very difficult for users to override inane, inaccurate, misleading browser warnings.  However, if you’re reading this, you can reduce the headache with a simple step (Thanks!):

Right-click on the shortcut you use to launch Chrome and modify the launch command by adding the following: “--ignore-certificate-errors”
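On a default Windows install, the shortcut’s Target field ends up looking something like this (your path may differ):

"C:\Program Files (x86)\Google\Chrome\Application\chrome.exe" --ignore-certificate-errors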

Unfuck chrome a bit.

Once you’ve done this, Chrome will open with a warning:

zomg: ignore certificate errors? who doesn't anyway?

YAY.  Suffer my ass.

Java?  What happened to Java?

Bonus rant

Java sucks so bad.  It is the second worst abomination loosed on the internet, yet lots of systems use it for useful features, or try to.  There are endless compatibility problems with JVM versions, and there’s the absolutely idiotic horror of the recent security requirement that completely disables the “medium” security setting no matter how hard you want to override it, which means you can’t ever update past JVM 7.  Ever.  Because 8 is utterly useless: they broke it completely thinking they’d protect you from man-in-the-middle attacks on your own LAN.

However, even if you have frozen on the last moderately usable version of Java, you’ll find that since Chrome 42 (yeah, the 42nd major release of Chrome; that numbering scheme is another frustratingly stupid move, but anyway, get off my lawn) Java just doesn’t run in Chrome.  WTF?

Turns out Google, happy enough to push their own crappy products like Google+, won’t support Oracle’s crappy product any more.  As of 42, Java is disabled by default.  Apparently, after 45 it won’t ever work again.  I’d be happy to see Java die, but I have a lot of infrastructure that requires Java for KVM connections, camera management, and other equipment that foolishly embraced that horrible standard.  Anyhow, you can fix it until 45 comes along…

To enable Java in Chrome for a little while longer, you can follow these instructions to enable NPAPI in Chrome 42-44 (which re-enables Java).  Type “chrome://flags/#enable-npapi” in the browser bar and click “Enable.”

Enable NPAPI

Posted at 13:24:37 GMT-0700

Category: HowTo, Security, Technology

Superfish proves certs are useless for identification

Saturday, February 21, 2015 

Can we please, please stop with the stupid certificate verification warnings?

[Image: Superfish logo]

Dear security developers, your model is broken. It never worked. Stop warning people about certificate errors. Now. Forever.

Certificate errors serve two purposes:

  1. They make developers uncomfortable with using perfectly secure self-signed certs, and since commercial certs cost money, much of the web that could be encrypted remains unencrypted. That’s harm done to the public. Thanks.
  2. They happen so often, so relentlessly, for such trivial reasons (not even Google can keep their certs up to date) that users learn to ignore them, which makes an actual man-in-the-middle attack almost certain to succeed with most people, despite the warnings.

The Certificate Authority system is predicated on the idea that Certificate Authorities are flawless and trustworthy. They are neither. The Lenovo/Superfish problem shows another obvious flaw: hardware vendors (and actually any trusted software installer) have to be trustworthy too, or client-side MITM is easy. And CAs simply can’t verify against that.

This whole idiocy creates massive problems for something so basic as LAN administration. Even before wireless became pervasive, LAN communications should have been encrypted whenever passwords or any meaningful data were moving. Current security settings create a massive avalanche of useless errors for “untrustworthy certs” on one’s own network (the obvious fix is to automatically trust all certs on private networks, duh).

This is an issue that bothers me a lot. It gets in my way constantly and makes real security and encrypted communications way harder and way more complicated than it needs to be and the only beneficiaries at all are the certificate Mafiosi. This is just stupid. Superfish proves, again, how broken it is. Can we stop pretending now?

Also, this most recent of many certificate flaws comes with a bonus feature: the MITM cert Superfish uses is apparently really pathetically insecure. Aside from using broken crypto, their software had its password embedded in it, making it easy for crackers to develop tools to harvest additional data from the victims of the Superfish/Lenovo attack. It probably hurts more to find out your vendor hacked you, but the penalty is that the hack also destroyed the security of all of your communications. Thanks. This is why we can’t have nice things. It is also why any back door, no matter what the motive, compromises security.


 

Update: Superfish is, apparently, out of business.  While that sucks for the people at the company, who were probably very happy with their Lenovo OEM deal and instead got a big sock of coal, one might naively hope for an upside: that companies considering a model based on stealing people’s data might take notice of the cautionary tale of Superfish.

Unfortunately, that won’t happen, not in the current valley climate. While it is economically advantageous to hire cheap kids who have no life and will work long hours for meagre pay, they come with a downside: they are all ignorant idiots. I don’t mean they’re not smart or capable (though the smart barrel was long ago drained and the vast majority of brogrammers sauntering around SF really are stupid), rather that they are foolish, as in the opposite of “wise.”  Wisdom comes from experience, and experience only comes with time, an immutable dimension.  This Superfish debacle was only from Feb 2015, but this year’s batch of idiot brogrammers weren’t around to see it, and as they gather in self-congratulatory clusters in posh, VC-funded collaborative spaces, company barista-brewed latte in one hand and social-media-distraction-feeding portable device in the other, they’ll be high-fiving and fist-bumping the brilliance of their brand-new idea for getting around SSL so they can collect marketing data and better target advertising.  Yay.


 

How to fix Superfish:

Install Perspectives. And support them.

Also, this bugs the crap out of me:

Overthrow the Cert Mafia!

SSL for Authentication Sucks

Unbreaking Firefox SSL Behavior

The CA System is Intractably Broken

Posted at 02:45:41 GMT-0700

Category: Security, Technology

Sony-style Attacks and eMail Encryption

Friday, December 19, 2014 

Some of the summaries of the Sony attacks are a little despairing of the viability of internet security, for example Schneier:

This could be any of us. We have no choice but to entrust companies with our intimate conversations: on email, on Facebook, by text and so on. We have no choice but to entrust the retailers that we use with our financial details. And we have little choice but to use butt services such as iButt and Google Docs.

I respectfully disagree with some of the nihilism here: you do not need to put your data in the butt. Butt services are “free,” but only because you’re the product.  If you think you have nothing to hide and privacy is dead and irrelevant, you are both failing to keep up with the news and extremely unimaginative. You think you have no enemies?  Nobody would do you wrong for the lulz?  Nobody who would exploit information leaks for social engineering to rip you off?

Use butt services only when the function the service provides is predicated on a network effect (like Facebook) or simply can’t be replicated with individual scale resources (Google Search).  Individuals can reduce the risk of being a collateral target by setting up their own services like an email server, web server, chat server, file server, drop-box style server, etc. on their own hardware with minimal expertise (and the internet is actually full of really good and expert help if you make an honest attempt to try), or use a local ISP instead of relying on a global giant that is a global target.

Email Can be Both Secure AND Convenient:

But there’s something this Sony attack has made even more plain: email security is bad.  Not every company uses the most insecure email system possible and basically invites hackers to a data smorgasbord like Sony did by using Outlook (I mean seriously, they can’t afford an IT guy whose expertise extends beyond point-n-click?  Though frankly the most disappointing deployment of Outlook is by MIT’s IT staff.  WTF?).

As lame as that is, email systems in general suffer from an easily remediated flaw: email is stored on the server in plain text, which means that as soon as someone gets access to the email server, which is by necessity of function always globally network accessible, all historical mail is there for the taking.

Companies institute deletion policies whereby exposed correspondence is minimized by auto-deleting mail after a relatively short period, typically about as short as possible while still, more or less, enabling people to do their jobs.  This forced amnesia is a somewhat pathetic and destructive solution to what is otherwise an excellent historical resource: access to historical records is as useful to the employees as to hackers, and forced deletion is no more than self-mutilation to become a less attractive target.

It is trivial to create a much more secure environment with no meaningful loss of utility with just a few simple steps.

Proposal to Encrypt eMail at Rest:

I wrote in detail about this recently.  I realize it is a TLDR article, but as everyone’s wound up about Sony, a summary might serve as a lead-in for the more actively procrastinating. With a few very simple fixes to email clients (which could be implemented with a plug-in) and to email servers (which can be implemented via mail scripting like procmail or amavis), email servers can be genuinely secure against data theft.  These fixes don’t exist yet, but the two critical but trivial changes are:

Step One: Server Fix

  • Your mail server will have your public key on it (which is not a security risk) and use it to encrypt every message before delivering it to your mailbox if it didn’t come in already encrypted.

This means all the mail on the server is encrypted as soon as it arrives, and if someone hacks in, the store of messages is unreadable.  Maybe a clever hacker can install a program to exfiltrate incoming messages before they get encrypted, but doing this without being detected is very difficult and time consuming.  Grabbing an .ost file off some lame Windows server is trivial. I don’t mean to engage in victim blaming, but seriously, if you don’t want to get hacked, don’t go out wearing Microsoft.
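To make the idea concrete, here is a deliberately simplified sketch of that server-side step, not an existing tool: a filter you could call from procmail that GPG-encrypts the text body of any message that isn’t already encrypted before it’s written to the mailbox.  A real implementation would need proper PGP/MIME handling; the recipient address is a placeholder for the mailbox owner’s public key.

# Sketch: stdin-to-stdout mail filter that encrypts not-yet-encrypted messages.
# Simplified (text body only, no PGP/MIME); "you@example.org" is a placeholder.
import subprocess
import sys
from email import message_from_bytes, policy

RECIPIENT = "you@example.org"  # the mailbox owner's public key ID

raw = sys.stdin.buffer.read()
msg = message_from_bytes(raw, policy=policy.default)
body = msg.get_body(preferencelist=("plain",))

if body is not None and "BEGIN PGP MESSAGE" not in body.get_content():
    cipher = subprocess.run(
        ["gpg", "--batch", "--armor", "--trust-model", "always",
         "--encrypt", "-r", RECIPIENT],
        input=body.get_content().encode(), capture_output=True, check=True,
    ).stdout
    body.set_content(cipher.decode())  # replace plaintext with armored ciphertext

sys.stdout.buffer.write(msg.as_bytes())

A procmail rule would just pipe each incoming message through a filter like this before delivery.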

Encrypting all mail on arrival is great security, but it also means that your inbox is encrypted, and since current email clients decrypt your mail for viewing but then “forget” the decrypted contents, encrypted messages are slower to view than unencrypted ones and, most crippling of all, you can’t search your encrypted mail.  This makes encrypted mail unusable, which is why nobody uses it after decades. This unusability is a tragic and pointless design flaw that originated to mitigate what was then, apparently, a sore spot with one of Phil’s friends whose wife had read his correspondence with another woman and divorce ensued; protecting the contents of email from client-side snooping has ever since been perceived as critical. [1]

It was a well-intentioned design constraint and has become a core canon of the GPG community, but is wrong-headed on multiple counts:

  1. An intimate partner is unlikely to need the contents of the messages to reach sufficient confidence in distrust: the presence of encrypted messages from a suspected paramour would be more than sufficient cause for a confrontation.
  2. It breaks far more frequent use such as business correspondence where operational efficiency is entirely predicated on content search which doesn’t work when the contents are encrypted.
  3. Most email compromises happen at the server, not at the client.
  4. Everyone seems to trust butt companies to keep their affairs private, much to the never-ending lulz of such companies.
  5. Substantive classes of client compromises, particularly targeted ones, capture keystrokes from the client, meaning if the legitimate user has access to the content of the messages, so too does the hacker, so the inconvenience of locally encrypted mail stores gains almost nothing.
  6. Server attacks are invisible to most users and most users can’t do anything about them.  Users, like Sony’s employees, are passive victims of sysadmin failures. Client security failures are the user’s own damn fault and the user can do something about them like encrypting the local storage of their device which protects their email and all their other sensitive and critical selfies, sexts, purchase records, and business correspondence at the same time.
  7. If you’re personally targeted at the client side, that some of your messages are encrypted provides very little additional security: the attacker will merely force you to reveal the keys.

Step Two: Client Fix

  • Your mail clients will decrypt your mail automatically and create local stores of unencrypted messages on your local devices.

If you’ve used GPG, you probably can’t access any mail you got more than a few days ago; it is dead to you because it is encrypted.  I’ve said before this makes it as useless as an ephemeral key encrypted chat but without the security of an ephemeral key in the event somebody is willing to force you to reveal your key and is interested enough to go through your encrypted data looking for something.  They’ll get it if they want it that bad, but you won’t be bothered.

But by storing mail decrypted locally and by decrypting mail as it is downloaded from the server, the user gets the benefit of “end-to-end encryption” without any of the hassles.
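A sketch of the client half (again illustrative, not an existing plug-in): as each message is downloaded, detect an armored PGP payload, decrypt it once via gpg (with gpg-agent holding the passphrase), and hand the plaintext to the local store so indexing and search behave normally.

# Sketch: decrypt an armored PGP body once, at download time, so the local
# mail store holds searchable plaintext. Assumes gpg-agent manages the key.
import subprocess

def decrypt_on_download(body: str) -> str:
    """Return plaintext for local storage; pass unencrypted bodies through."""
    if "-----BEGIN PGP MESSAGE-----" not in body:
        return body
    result = subprocess.run(
        ["gpg", "--batch", "--quiet", "--decrypt"],
        input=body.encode(), capture_output=True, check=True,
    )
    return result.stdout.decode(errors="replace")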

GPG-encrypted mail would work a lot more like an OTR encrypted chat.  You don’t get a message from OTR that reads “This chat message is encrypted, do you want to decrypt it?  Enter your password” every time you get a new chat, nor does the thread get re-encrypted as soon as you type something, requiring you to reenter your key to review any previous chat message.  That’d be idiotic.  But that’s what email does now.

Adoption Matters

These two simple changes would mean that server-side mail stores are secure, but just as easy to use and as accessible to clients as they are now.  Your local device security, as it is now, would be up to you.  You should encrypt your hard disk and use strong passwords because sooner or later your personal device will be lost or stolen and you don’t want all that stuff published all over the internet, whether it comes from your mail folder or your DCIM folder.

It doesn’t solve a targeted attack against your local device, but you’ll always be vulnerable to that and pretending that storing your encrypted email on your encrypted device in an encrypted form adds security is false security that has the unfortunate side effect of reducing usability and thus retarding adoption of real security.

If we did this, all of our email would be encrypted, which means there’s no additional hassle to getting mail that was encrypted with your GPG key by the sender (rather than on the server).  The way it works now, GPG is annoying enough to warrant asking people not to send encrypted mail unless they have to, which tags that mail as worth encrypting to anyone who cares.  By eliminating the disincentive, universally end-to-end encrypted email would become possible.

A few other minor enhancements that would help to really make end-to-end, universally encrypted email the norm include:

  • Update mail clients to prompt for key generation along with any new account (the only required option would be a password, which should be different from the server-log-in password since a hash of that has to be on the server and a hash crack of the account password would then permit decryption of the mail there, so UX programmers take note!)
  • Update address books, vcard, and LDAP servers so they expect a public key for each correspondent and complain if one isn’t provided or can’t be found.  An email address without a corresponding key should be flagged as problematic.
  • Corporate and hierarchical organizations should use a certificate authority-based key certification system, everyone else should use web-of-trust/perspectives style key verification, which can be easily automated to significantly reduce the risk of MitM attacks.

This is easy. It should have been done a long time ago.

 

Footnotes

[1] I remember this anecdote from an early 1990’s version of PGP.  I may be mis-remembering it as the closest reference I can find is this FAQ:
Posted at 16:21:29 GMT-0700

Category: FreeBSD, Privacy, Security, Technology