
10 Gbyte Win10 Spyware “upgrade” now forced on users

Sunday, September 27, 2015 

Microsoft has, historically, done some amazingly boneheaded things like Clippy, Vista, Win 8, and Win 10.  They have one really good product, Excel; otherwise everything they’ve done has succeeded only through illegal exploitation of an aggressively defended monopoly.  OK, maybe the Xbox is competitive, but I’m not much of a gamer.

Sadly for the world, the model of selling users for profit to advertisers and spies has gained ground to the point where Microsoft was starting to look like the least evil major entity in closed-source computing.  Poor Microsoft.  To lose the evil crown must be at least as humiliating as their waning revenue and abject failures in the mobile space (so strange… try to enter a space where they don’t have a monopoly to force users to accept their mediocre crap, and they fail; who’da thunk it?).

“There is a difference between policy and practice. We don’t read customers mail. We don’t read customer documents. We don’t triangulate YouTube views and searches. We don’t use the content of your Hotmail to target ads in Bing,”

Frank Shaw, Corporate Vice President of Corporate Communications for Microsoft

Well, never fear: Windows 10 is here, and they’re radically one-upping the data theft economy by p0wning not just the data you idiotically entrust to someone else’s server for free, without ever considering why they’re giving you that useful service for “free” or what they, or whoever buys their ultimately failed business, might do with your data, but also the data you consider too sensitive for the Google or the Apple.  Windows 10 exfiltrates all your data to Microsoft for their use and profit, without your informed consent.  Don’t believe it? Read their Privacy Statement.

Finally, we will access, disclose and preserve personal data, including your content (such as the content of your emails, other private communications or files in private folders), when we have a good faith belief that doing so is necessary.

And it is free (as in beer but not as in speech).  What could possiblay go wrong?

Well, people weren’t updating fast enough so Microsoft is now pushing that update on you involuntarily.  Do you have a data cap that a 10G download might break and cost you money?  So what!  Your loss!  Don’t have enough space on your drive for a 10G hidden folder of crapware foisted off on you without your permission?  Tough crap, Microsoft don’t care.

To be clear, Windows 10 is spyware.  If this were coming from a teenage hacker somewhere, they’d be facing jail time.  It is absolutely, unequivocally malware that will create a liability for you if you use it.  If you have any confidentiality requirement, you must not install Windows 10.  Ever. Not even on your home machine.  Just don’t.

The only way to prevent this is really annoying and a little risky: disable automatic downloads.  One of the problems with Microsoft’s operating systems is the unbelievably crappy spaghetti code that results in a constant flow of cracks; a week’s worth are patched every Tuesday.  These days that runs to about one serious vulnerability every fortnight (note this is about the same as Ubuntu and about a quarter the rate of OSX or iOS; why people think Apple products are “secure” is beyond me – live in that fantasy walled garden!  But nice logo you paid a 50% premium for on your shiny device). Not patching increases the risk that some hacker somewhere will steal your datas, but patching guarantees that Microsoft will steal your datas.  Keep your anti-virus up to date and live a little dangerously by keeping Microsoft out.

Here’s an interesting article: how-to-clean-the-windows-10-crapware-off-your-windows-7-or-81-pc

And a tool referenced in that article: GWX Control Panel, which can help remove the Windows 10 infection if you’ve got it.

And a list of patches I found that are related to the Win10 malware, which you can remove if you haven’t installed it yet.  (Windows 10 eliminates the ability to choose or selectively remove patches; once you’re in for the ride, you’re chained in: all or nothing.)

Basic advice:

  • Disable automatic updates and automatic downloads of updates.
  • Review each update Microsoft offers.  This is tedious: my Win 7 install reports 384 updates, and 5-10 more arrive each week, but other than security patches you probably don’t really need them.  Only install a patch if there’s a reason.  Sorry, that sucks, but there’s always Linux Mint: free like beer AND free like speech.
  • If you’re still on Win 7/8, uninstall the spyware Microsoft has probably already installed.  If you’re on Windows 8, you probably want to upgrade to Windows 7 if at all possible.
  • If you succumbed to the pressure and became a Microsoft Product by installing Windows 10, uninstall it.
  • If uninstall doesn’t work, switch to Mint or reinstall 7.

Most importantly, if you develop software for servers or for end users, stop developing for Microsoft (and Apple too).  Respect the privacy of your customers by not exposing them to exploitation by desperate operating system vendors.  In many classes of applications, your customers buy their computers to run your software: they don’t care what operating system it requires – that should be transparent and painless.  Microsoft is no longer an even remotely acceptable choice.  Server applications should run under FreeBSD or OpenBSD and desktop applications should run under Linux.  You can charge more and generate more profit because the total net cost for your customers will be lower.  Split the difference and give them a more reliable, more secure, and lower cost environment and make more money doing so.

Posted at 08:07:54 GMT-0700

Category: FreeBSD, HowTo, Linux, Security, Technology

Microsoft Spyware Now Being Installed On Win 7

Monday, August 24, 2015 

If you’re the sort of person who isn’t entirely happy about the idea of Microsoft claiming the right to copy your personal files, photos, emails, chat logs, diary entries, medical records, etc. over to their own servers to sell to whoever they want for whatever they can get for your personal data – into markets that already exist for insurance companies to deny you insurance based on algorithmic analysis of your habits or your friends’ habits, for financial institutions to set your interest rates based on similar criteria, or perhaps even for law enforcement to investigate you without a warrant – then OBVIOUSLY you would never, ever install Windows 10 under any circumstances.

Well, Microsoft seems to have fully jumped on the Google/Facebook gravy train and is now completely invested in stealing your data and selling it to the highest bidder (Apple has been exfiltrating your data for a long time, but so far for internal use).  I’ve become more suspicious of Microsoft’s updates since they made the Windows 10 advertisement an “important” (not optional) update (important for what? their bottom line, obviously).  Turns out that the latest updates to Windows 7 are pushing Microsoft’s new business model of stealing your data for profit to Windows 7 and 8.

Staying safe is going to require ever more vigilance.  It may be possible to block Windows components from reaching out to Microsoft’s servers at the personal firewall level, and it certainly can be done at the corporate firewall level (and should be), but blocking Microsoft is a somewhat complex issue.  You can’t run Windows safely without installing security patches because the underlying OS is so completely insecure that new, critical, exploitable flaws are discovered every single week.  If you don’t constantly patch these security failures, you will be hacked by people other than Microsoft.  If you install the wrong Microsoft patch, you will be hacked by Microsoft.  Debian anyone? Also, software developers developing enterprise software: please, please, please stop developing for that horrible, insecure, performance-hobbling abomination of a tarted-up single-user OS “Server” and focus on a secure, stable server OS like FreeBSD.  Please.  I hate, hate having to fork over $1k to Microsoft for each box to run their horrible OS just so I can run your software.  Why do you support that extortion? Do you despise your customers that much? Stop.

If you care about corporate governance and data security or HIPAA compliance, you are probably violating some critical requirements by installing Windows 10, or these new updates to your existing Win 7/8 base, if you do not block data exfiltration to Microsoft’s servers.  This is spyware.  These updates are stealing your data and sending it to Microsoft.  If your business is subject to data privacy laws, these updates put you in violation of those laws.  Microsoft is doing something extremely significant, extremely evil, and completely wrong.  Take action or you may very well be facing personal or corporate consequences.  srsly.

I am a strong believer in data privacy and extremely suspicious of what I consider highly disingenuous business practices like Google’s, but I recognize that there are reasonable people out there who think Google isn’t evil.  However, this Windows 10 issue, now being pushed to Windows 7, goes well beyond Google taking advantage of people’s historical assumptions about the security of email to offer them a free look-alike honey trap to gather their data.  Windows 10 and these Win 7 updates are intrusive, not merely misleading.  Do not update.  Srsly.  Do not update.  Block the spyware “hotfixes.”

Stop Gap Fixes

In researching these updates, I came across this article on techworm with a nice summary of the malware updates Microsoft is pushing out, along with some additional amendments I found.
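For reference, here’s a sketch of removing the most commonly cited of these with the stock Windows update tool, wusa.exe, run from an elevated prompt.  The KB numbers below are the telemetry and “Get Windows 10” updates most often named in these lists – treat them as assumptions and verify each against its KB description before uninstalling, and hide each one afterward so it doesn’t come right back:

    import subprocess

    # KB numbers commonly cited as telemetry/Win10-upsell updates; verify
    # each against Microsoft's KB description before uninstalling.
    KB_PATCHES = [
        "3068708",  # "customer experience and diagnostic telemetry"
        "3022345",  # earlier version of the same telemetry payload
        "3075249",  # adds telemetry points to consent.exe
        "3080149",  # another CEIP/telemetry update
        "3035583",  # GWX, the "Get Windows 10" nagware
        "2952664",  # Win 7 "compatibility appraiser" for the Win 10 upgrade
    ]

    for kb in KB_PATCHES:
        # wusa.exe is the stock Windows Update Standalone Installer
        subprocess.run(["wusa", "/uninstall", f"/kb:{kb}", "/quiet", "/norestart"],
                       check=False)  # keep going if a patch isn't installed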

With a whiff of irony, this google search “telemetry site:https://support.microsoft.com/en-us/kb” shows these patches and many more…

Do not automatically install Microsoft updates.  You must turn that feature off or you will keep getting additional spyware installed.  Go to Windows Update and verify your settings.  I have mine set so Windows downloads the updates (so they’re waiting locally), but I don’t let Windows install them automatically.  That gives me a chance to review the updates and look for spyware.

[Screenshot: Windows Update settings]
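The same setting can be made programmatically; a minimal sketch using the standard Windows Update policy key (needs an elevated prompt):

    import winreg

    AU_KEY = r"SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU"

    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, AU_KEY, 0,
                            winreg.KEY_SET_VALUE) as key:
        # AUOptions 3 = auto-download updates, but let me choose when to install
        winreg.SetValueEx(key, "AUOptions", 0, winreg.REG_DWORD, 3)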

When you get updates, you now have to check each one of them to find out whether it is spyware or not.  The list above is current as far as I know, but clicking the “more information” link to the right of the updates list will get you Microsoft’s marketing-speak obfuscation of the true purpose.  Any update that “adds telemetry points” or something like that is spyware.  Uncheck the install and hide the update.  Note that some of these were moved from “optional” to “important.”  Microsoft is absolutely intent on stealing your data and is taking some pretty underhanded steps to make it difficult for you to avoid it.

[Screenshot: blocking Microsoft spyware updates]

 

If updates get past you or it turns out later that a seemingly important or innocuous update was spyware (the fun part is that you now have to be vigilant and look all this stuff up), then you can uninstall them from the “installed updates” control panel.

[Screenshot: uninstalling updates from the Installed Updates control panel]

Work to be done

I’ll start looking into firewall settings to block communication to Microsoft’s servers.  This is a standard anti-malware technique and should work here, except that Microsoft has so many servers that blocking them is more challenging than blocking your typical malware botnet.

We need something like a variant of Peer Guardian to block Microsoft’s servers using the standard P2P crowd-sourcing model to keep the list up to date. I’m not aware of anything like this yet, but I’m looking.  Microsoft has become more of an enemy to privacy than the RIAA ever was.

UPDATE: this superuser answer includes a list of telemetry endpoints to block at your firewall or router.  Alternatively, you can edit your hosts file and add these entries from DSL Reports.
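A minimal sketch of the hosts-file approach; the hostnames below are a sample drawn from those lists, not an exhaustive or current set, so check the sources above before relying on it (run as administrator):

    # Append known telemetry hostnames to the hosts file so they resolve
    # to an unroutable address.  Sample list only; not exhaustive.
    TELEMETRY_HOSTS = [
        "vortex.data.microsoft.com",
        "vortex-win.data.microsoft.com",
        "telecommand.telemetry.microsoft.com",
        "oca.telemetry.microsoft.com",
        "sqm.telemetry.microsoft.com",
        "watson.telemetry.microsoft.com",
        "settings-win.data.microsoft.com",
    ]

    HOSTS_PATH = r"C:\Windows\System32\drivers\etc\hosts"

    with open(HOSTS_PATH, "a") as hosts:
        for name in TELEMETRY_HOSTS:
            hosts.write(f"0.0.0.0 {name}\n")  # 0.0.0.0 fails faster than 127.0.0.1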

Larger Significance

This shift in business focus by Microsoft from providing a product people are willing to pay for to stealing data from people to sell on the commercial market has some significant lessons for the entire software model.

It isn’t just that Microsoft is now adopting Google’s business model of giving away “free” goodies as traps to collect product (you) to sell to the highest bidder, but that the model of corporate trust that underpins most of the security assumptions the internet is built on is manifestly false and unsustainable.  If any hacker tried to create these spyware updates, locked-down computers that only install signed code would refuse to install them.  Ignoring for the moment that the signed-code model is idiotically flawed, as signing keys are stolen all the time, this Microsoft spyware is properly signed with legitimate keys.  It will be installed on locked-down computers without complaint and will not show up in commercial anti-virus software.  But it is spyware.  It contains keyloggers and extremely productive data exfiltration code that is currently copying wholesale data dumps from unfortunate victims to Microsoft’s servers in such volume that victims’ data caps are being hit.

If a non-commercial third party (e.g. a “hacker”) did this, they’d be prosecuted.  It makes no difference to you that your data is being stolen by Microsoft rather than by some clever teenager in a former Eastern Bloc country: your data is being stolen.  But the model that has been promoted, a model of centralized corporate trust to validate the “security” of your system, has been utterly and irrevocably shattered.  This isn’t an accident, nor something that better data management might have prevented; this is an intentional ex post facto rewrite of the usual, customary, and regular assumptions we have about the privacy of our computer systems, and one that significantly impacts the security of almost everyone in the world: military, medical, legal, fiduciary, as well as personal.

And even if you trust Microsoft (for whatever bizarre, irrational reason), Microsoft is creating a whole series of security holes in their already crappy and insecure operating system that will be exploited by third parties.  By adding keyloggers and data exfiltration tools to the core OS, they’re making it even easier for non-corporate hackers to jump on the data theft gravy train. Everyone profits but you. You lose.

Posted at 04:19:18 GMT-0700

Category: Privacy, Technology

Windows 10 Privacy Annihilator

Tuesday, August 4, 2015 

Why would Microsoft, a company whose revenue comes entirely from sales of Windows and Office, start giving Windows 10 away – not just giving it away, but foisting it on users with unbelievably annoying integrated advertisements in the menus of Win 7/8 that pop up endlessly, are tedious to remove, and constantly reinstall themselves?

Have they just gone altruistic?  Decided that while they won’t make software free like speech, they’ll make it free like beer? Or is there something more nefarious going on? Something truly horrible, something that will basically screw over the entire windows-using population and sell them off like chattel to any bidder without consent or knowledge?

Of course, it is the latter.

Microsoft is a for-profit company and while their star has been waning lately and they’ve basically ceded the evil empire mantle to Apple, they desperately want to get into the game of stealing your private information and selling it to whoever is willing to pay.

So that’s what Windows 10 does.  It enables Microsoft to steal all of your information, every email, photo, or document you have on your computer and exfiltrate it silently to Microsoft’s servers, and to make it legal they have reserved the right to give it to whoever they want.  This isn’t just the information you stupidly gifted to Google by being dumb enough to use Gmail or ignorantly gifted to Apple by being idiotic enough to load into the iButt, but the files you think are private, on your computer, the ones you don’t upload.  Microsoft gets those.

Finally, we will access, disclose and preserve personal data, including your content (such as the content of your emails, other private communications or files in private folders), when we have a good faith belief that doing so is necessary.

They’ll “access” your data and “disclose” it (meaning to a third party) whenever they have a good faith belief that doing so is necessary.  No warrant needed.  It is necessary for Microsoft to make a buck, so if a buck is offered for your data, they’re gonna sell it.

If you install Windows 10, you lose. So don’t. If you need to upgrade your operating system, it is time to switch to something that preserves Free like speech: Linux Mint is probably the best choice.

If you’re forced to run Windows 10 for some reason and can’t upgrade to Windows 7, then follow these instructions (and these) and remain vigilant: Microsoft’s new strategy is to steal your data and sell it via any backdoor they can sneak past you. Locking them down is going to be a lot of work and might not be possible, so keep an eye out for your selfies showing up on pr0n sites: they pay for pix, and once you install Windows 10, Microsoft has every right to sell yours.


 

Update: you can’t stop windows 10 from stealing your private data

That’s not quite true – if you never connect your computer to a network, it is very unlikely that Microsoft will be able to secretly exfiltrate your private data through the Windows 10 trojan.  However, it turns out that while the privacy settings do reduce the amount of data that gets sent back to Microsoft, they continue to steal your data even though you’ve told them not to.

Windows 10 is spyware.  It is not an operating system; it is Trojan malware masquerading as an operating system whose true purpose is to steal your data so Microsoft can sell it without your consent.  If you install Windows 10, you are installing spyware.

Win 10 has apparently been installed 65 million times.  That’s more than three times as many people whose most intimate, most private data has been stolen as in the Ashley Madison attack.  If you value privacy, if the idea that you might be denied a loan or insurance because of secret data stolen from your computer without your consent bothers you, if the idea of having evidence of your potential crimes shared with law enforcement without your knowledge and without a warrant worries you, then do not install Windows 10.  Ever.

Posted at 11:00:30 GMT-0700

Category: Privacy, Technology

The CA System is Intractably Broken

Tuesday, July 21, 2015 

I’m dealing with the hassle of setting up certs for a new site over the last few days. It means using StartCom’s certs because they’re pretty good (only one security breach) and they have a decently low-hassle free certificate that won’t trigger BS warnings in browsers marketing the cert mafia’s fake placebo security products to unwitting users. (And the CTO answers email within minutes, well past midnight.)

And in the middle of this, news of another breach of the CA system was announced on the heels of Lenovo’s Superfish SSL crack: this time a class break in which a Chinese certificate authority generated the equivalent of a lawful intercept cert and provided it to a private company. Official lawful intercept certificates are a globally used tool to silently crack SSL so governments can monitor SSL-encrypted traffic in compliance with national laws like the US’s CALEA.

(aww, someone liked this: https://news.ycombinator.com/item?id=5858538)

But this time, it went to a private company and they were using it to intercept and crack Google traffic, and Google found out. The absurdity is to presume that this is an infrequent event. Such breaches (and a “breach” isn’t a lawful intercept tool, which are in constant and widespread use globally, but such a tool in the “wrong” hands) happen regularly. There’s no data on the ratio of discovered breaches to undiscovered breaches, of course. While it is possible that they are always found, seemingly accidental discoveries suggest far wider misuse than generally acknowledged.

The cert mafia should be abolished. Certificate authorities work for authoritarian environments in which a single entity is trusted by fiat as in a dictatorship or a company. The public should trust public opinion and a tool like Perspectives would end these problems as well as significantly lower the barrier to a fully encrypted web as those of us trying to protect our traffic wouldn’t need to choose between forking over cash to the cert mafia for fake security or making our users jump through scary security messages and complex work-arounds.

Posted at 00:53:59 GMT-0700

Category: FreeBSD, Privacy, Security, Technology

Making Chrome Less Horrible

Saturday, June 13, 2015 

Google’s Chrome is  a useful tool to have around, but the security features have gotten out of hand and make it increasingly useless for real work without actually improving security.

After a brief rant about SSL, there’s a quick solution at the bottom of this post.


 

Chrome’s Idiotic SSL Handling Model

I don’t like Chrome nearly as much as Firefox, but it does do some things better (I have a persistent annoyance with pfSense certificates causing slow loading of the pfSense management page in FF, for example). Lately I’ve found that the Google+ script seems to kill Firefox, so I use Chrome for logged-in Google activities.

But Chrome’s handling of certificates is abhorrent.  I’ve never seen anything so resolutely destructive to security and utility.  It is the most ill-considered, poorly implemented, counter-productive failure in UI design and security policy I’ve ever encountered.  It is hateful and obscene.  A disaster.  An abomination. The ill-conceived excrement of ignorant twits.  I’d be happy to share my unrestrained feelings privately.

It is a private network, you idiots

I’ve discussed the problem before, but the basic issues are that:

  • The certificate authority is NOT INVALID, Chrome just doesn’t recognize it because it is self-signed.  There is a difference, dimwits.
  • This is a private network (10.x.x.x or 192.168.x.x) and if you pulled your head out for a second and thought about it, white-listing private networks is obvious.  Why on earth would anyone pay the cert mafia for a private cert?  Every web-interfaced appliance in existence automatically generates a self-signed cert, and Chrome flags every one of them as a security risk INCORRECTLY.
  • A “valid” certificate merely means that one of the zillions of cert mafia organizations ripping people off by pretending to offer security has “verified” the “ownership” of a site before taking their money and issuing a certificate that placates browsers
  • Or a compromised certificate is being used.
  • Or a law enforcement certificate is being used.
  • Or the site has been hacked by criminals or some country’s law enforcement.
  • etc.

A “valid” certificate doesn’t mean nothing at all, but close to it.

So one might think it is harmless security theater, like a TSA checkpoint: it does no real harm and may have some deterrent value.  It is a necessary fiction to ensure people feel safe doing commerce on the internet.  If a few percent of people are reassured by firm warnings and are thus seduced into consummating their shopping carts, improving ad traffic quality and thus ensuring Google’s ad revenue continues to flow, ensuring their servers continue sucking up our data, what’s the harm?

The harm is that it makes it hard to secure a website.  SSL does two things: it pretends to verify that the website you connect to is the one you intended to connect to (but it does not do this) and it does actually serve to encrypt data between the browser and the server, making eavesdropping very difficult.  The latter useful function does not require verifying who owns the server, which can only be done with a web of trust model like perspectives or with centralized, authoritarian certificate management.

How to fix Chrome:

The damage is done. Millions of websites that could be encrypted are not because idiots writing browsers have made it very difficult for users to override inane, inaccurate, misleading browser warnings.  However, if you’re reading this, you can reduce the headache with a simple step (Thanks!):

Right click on the shortcut you use to launch Chrome and modify the launch command by adding the flag “--ignore-certificate-errors”.
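On a default 64-bit Windows install the modified shortcut target ends up looking something like this (the path is an assumption; yours may differ):

    "C:\Program Files (x86)\Google\Chrome\Application\chrome.exe" --ignore-certificate-errors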

[Screenshot: Chrome shortcut properties. “Unfuck chrome a bit.”]

Once you’ve done this, chrome will open with a warning:

[Screenshot: Chrome’s warning banner. “zomg: ignore certificate errors? who doesn’t anyway?”]

YAY.  Suffer my ass.

Java?  What happened to Java?

Bonus rant

Java sucks so bad.  It is the second worst abomination loosed on the internet, yet lots of systems use it for useful features, or try to.  There are endless compatibility problems with JVM versions, and there’s the absolutely idiotic horror of the recent security requirement that completely disables the “medium” security setting no matter how hard you want to override it, which means you can’t ever update past JVM 7.  Ever.  Because 8 is utterly useless: they broke it completely, thinking they’d protect you from man-in-the-middle attacks on your own LAN.

However, even if you have frozen at the last moderately usable version of Java, you’ll find that since Chrome 42 (yeah, the 42nd major release of Chrome; that numbering scheme is another frustratingly stupid move, but anyway, get off my lawn) Java just doesn’t run in Chrome.  WTF?

Turns out Google, happy enough to push their own crappy products like Google+, won’t support Oracle’s crappy product any more.  As of 42 Java is disabled by default.  Apparently, after 45 it won’t ever work again.  I’d be happy to see Java die, but I have a lot of infrastructure that requires Java for KVM connections, camera management, and other equipment that foolishly embraced that horrible standard.  Anyhow, you can fix it until 45 comes along…

To enable Java in Chrome for a little while longer, you can follow these instructions to re-enable NPAPI (and with it Java) in Chrome 42 through 44.  Type “chrome://flags/#enable-npapi” in the browser bar and click “enable.”

[Screenshot: enabling NPAPI via chrome://flags]

Posted at 13:24:37 GMT-0700

Category: HowTo, Security, Technology

Superfish proves certs are useless for identification

Saturday, February 21, 2015 

Can we please, please stop with the stupid certificate verification warnings?

[Image: Superfish logo]

Dear security developers, your model is broken. It never worked. Stop warning people about certificate errors. Now. Forever.

Certificate errors serve two purposes:

  1. They make developers uncomfortable with using perfectly secure self-signed certs, and since commercial certs cost money, much of the web that could be encrypted remains unencrypted. That’s harm done to the public. Thanks.
  2. They happen so often, so relentlessly, for such trivial reasons (not even Google can keep their certs up to date) that users learn to ignore them, which makes an actual man-in-the-middle attack almost certain to succeed with most people, despite the warnings.

The Certificate Authority system is predicated on the idea that Certificate Authorities are flawless and trustworthy. They are neither. The Lenovo/Superfish problem shows another obvious flaw: hardware vendors (and actually any trusted software installer) have to be trustworthy too, or client-side MITM is easy. And CAs simply can’t guard against that.

This whole idiocy creates massive problems for something as basic as LAN administration. Even before wireless became pervasive, LAN communications should always have been encrypted when passwords or any meaningful data are moving. Current security settings create a massive avalanche of useless errors for “untrustworthy certs” on one’s own network (the obvious fix is to automatically trust all certs on private networks, duh).

This is an issue that bothers me a lot. It gets in my way constantly and makes real security and encrypted communications way harder and way more complicated than it needs to be and the only beneficiaries at all are the certificate Mafiosi. This is just stupid. Superfish proves, again, how broken it is. Can we stop pretending now?

Also, this most recent of many certificate flaws comes with a bonus feature: the MITM cert Superfish uses is apparently really pathetically insecure. Aside from using broken crypto, their software shipped with its password embedded in it, making it easy for crackers to develop tools to harvest additional data from the victims of the Superfish/Lenovo attack. It probably hurts more to find out your vendor hacked you, but the penalty is that the hack also destroyed the security of all of your communications. Thanks. This is why we can’t have nice things. It is also why any back door, no matter what the motive, compromises security.


 

Update: Superfish is, apparently, out of business.  While that sucks for the people at the company, who were probably very happy with their Lenovo OEM deal and instead got a big stocking full of coal, one might naively hope for an upside: companies considering a model based on stealing people’s data might take notice of the cautionary tale of Superfish.

Unfortunately, that won’t happen, not in the current valley climate. While it is economically advantageous to hire cheap kids who have no life and will work long hours for meagre pay, they come with a downside: they are all ignorant idiots. I don’t mean they’re not smart or capable (though the smart barrel was long ago drained and the vast majority of brogrammers sauntering around SF really are stupid), rather that they are foolish as in the opposite of “wise.”  Wisdom comes from experience, and experience only comes with time, an immutable dimension.  This Superfish debacle was only from Feb 2015, but this year’s batch of idiot brogrammers weren’t around to see it and, as they gather in self-congratulatory clusters in posh, VC-funded collaborative spaces, company barista-brewed latte in one hand and social-media-distraction-feeding portable device in the other, they’ll be high-fiving and fist-bumping the brilliance of their brand-new idea for getting around SSL so they can collect marketing data and better target advertising.  Yay.


 

How to fix Superfish:

Install Perspectives. And support them.

Also, this bugs the crap out of me:

Overthrow the Cert Mafia!

SSL for Authentication Sucks

Unbreaking Firefox SSL Behavior

The CA System is Intractably Broken

Posted at 02:45:41 GMT-0700

Category: Security, Technology

Sony-style Attacks and eMail Encryption

Friday, December 19, 2014 

Some of the summaries of the Sony attacks are a little despairing of the viability of internet security, for example Schneier:

This could be any of us. We have no choice but to entrust companies with our intimate conversations: on email, on Facebook, by text and so on. We have no choice but to entrust the retailers that we use with our financial details. And we have little choice but to use butt services such as iButt and Google Docs.

I respectfully disagree with some of the nihilism here: you do not need to put your data in the butt. Butt services are “free,” but only because you’re the product.  If you think you have nothing to hide and privacy is dead and irrelevant, you are both failing to keep up with the news and extremely unimaginative. You think you have no enemies?  Nobody who would do you wrong for the lulz?  Nobody who would exploit information leaks for social engineering to rip you off?

Use butt services only when the function the service provides is predicated on a network effect (like Facebook) or simply can’t be replicated with individual-scale resources (Google Search).  Individuals can reduce the risk of being a collateral target by setting up their own services (an email server, web server, chat server, file server, drop-box style server, etc.) on their own hardware with minimal expertise (and the internet is actually full of really good and expert help if you make an honest attempt to try), or by using a local ISP instead of relying on a global giant that is a global target.

Email Can be Both Secure AND Convenient:

But there’s something this Sony attack has made even more plain: email security is bad.  Not every company uses the most insecure email system possible and basically invites hackers to a data smorgasbord like Sony did by using Outlook (I mean seriously, they can’t afford an IT guy whose expertise extends beyond point-n-click?  Though frankly the most disappointing deployment of Outlook is by MIT’s IT staff.  WTF?).

As lame as that is, email systems in general suffer from an easily remediated flaw: email is stored on the server in plain text, which means that as soon as someone gets access to the email server, which by necessity of function is always globally network accessible, all historical mail is there for the taking.

Companies institute deletion policies where exposed correspondence is minimized by auto-deleting mail after a relatively short period, typically about as short as possible while still, more or less, enabling people to do their jobs.  This forced amnesia is a somewhat pathetic and destructive solution to what is otherwise an excellent historical resource: it is as useful to the employees as to hackers to have access to historical records and forced deletion is no more than self-mutilation to become a less attractive target.

It is trivial to create a much more secure environment with no meaningful loss of utility with just a few simple steps.

Proposal to Encrypt eMail at Rest:

I wrote in detail about this recently.  I realize it is a TLDR article, but as everyone’s wound up about Sony, a summary might serve as a lead-in for the more actively procrastinating. With a few very simple fixes to email clients (which could be implemented with a plug-in) and to email servers (which can be implemented via mail scripting like procmail or amavis), email servers can be genuinely secure against data theft.  These fixes don’t exist yet, but the two critical yet trivial changes are:

Step One: Server Fix

  • Your mail server will have your public key on it (which is not a security risk) and use it to encrypt every message before delivering it to your mailbox if it didn’t come in already encrypted.

This means all the mail on the server is encrypted as soon as it arrives, and if someone hacks in, the store of messages is unreadable.  Maybe a clever hacker can install a program to exfiltrate incoming messages before they get encrypted, but doing that without being detected is very difficult and time consuming.  Grabbing an .ost file off some lame Windows server is trivial. I don’t mean to engage in victim blaming, but seriously, if you don’t want to get hacked, don’t go out wearing Microsoft.
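A minimal sketch of this server-side fix as a procmail filter, assuming GnuPG plus the python-gnupg wrapper are on the server and the mailbox owner’s public key has been imported into a server-side keyring (the address and keyring path are placeholders):

    #!/usr/bin/env python3
    # Invoked from .procmailrc as a filter:
    #   :0 f
    #   | /usr/local/bin/encrypt_inbound.py
    import sys

    import gnupg  # python-gnupg wrapper around the gpg binary

    RECIPIENT = "user@example.com"  # placeholder: the mailbox owner's key
    KEYRING = "/var/mail/gnupg"     # placeholder: server-side public keyring

    raw = sys.stdin.read()
    headers, sep, body = raw.partition("\n\n")

    # Leave mail that arrived encrypted end-to-end alone
    if "-----BEGIN PGP MESSAGE-----" in body:
        sys.stdout.write(raw)
        sys.exit(0)

    gpg = gnupg.GPG(gnupghome=KEYRING)
    encrypted = gpg.encrypt(body, RECIPIENT, always_trust=True)

    # Fail open: better to deliver plain text than to lose mail
    sys.stdout.write(headers + sep + str(encrypted) if encrypted.ok else raw)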

Encrypting all mail on arrival is great security, but it also means that your inbox is encrypted.  Because current email clients decrypt your mail for viewing but then “forget” the decrypted contents, encrypted messages are slower to view than unencrypted ones and, most crippling of all, you can’t search your encrypted mail.  This makes encrypted mail unusable, which is why nobody uses it after decades. This unusability is a tragic and pointless design flaw that originated to mitigate what was then, apparently, a sore spot with one of Phil’s friends whose wife had read his correspondence with another woman and divorce ensued; protecting the contents of email from client-side snooping has ever since been perceived as critical.[1]

It was a well-intentioned design constraint and has become a core canon of the GPG community, but is wrong-headed on multiple counts:

  1. An intimate partner is unlikely to need the contents of the messages to reach sufficient confidence in distrust: the presence of encrypted messages from a suspected paramour would be more than sufficient cause for a confrontation.
  2. It breaks far more frequent use such as business correspondence where operational efficiency is entirely predicated on content search which doesn’t work when the contents are encrypted.
  3. Most email compromises happen at the server, not at the client.
  4. Everyone seems to trust butt companies to keep their affairs private, much to the never-ending lulz of such companies.
  5. Substantive classes of client compromises, particularly targeted ones, capture keystrokes from the client, meaning if the legitimate user has access to the content of the messages, so too does the hacker, so the inconvenience of locally encrypted mail stores gains almost nothing.
  6. Server attacks are invisible to most users and most users can’t do anything about them.  Users, like Sony’s employees, are passive victims of sysadmin failures. Client security failures are the user’s own damn fault and the user can do something about them like encrypting the local storage of their device which protects their email and all their other sensitive and critical selfies, sexts, purchase records, and business correspondence at the same time.
  7. If you’re personally targeted at the client side, that some of your messages are encrypted provides very little additional security: the attacker will merely force you to reveal the keys.

Step Two: Client Fix

  • Your mail clients will decrypt your mail automatically and create local stores of unencrypted messages on your local devices.

If you’ve used GPG, you probably can’t access any mail you got more than a few days ago; it is dead to you because it is encrypted.  I’ve said before this makes it as useless as an ephemeral key encrypted chat but without the security of an ephemeral key in the event somebody is willing to force you to reveal your key and is interested enough to go through your encrypted data looking for something.  They’ll get it if they want it that bad, but you won’t be bothered.

But by storing mail decrypted locally and by decrypting mail as it is downloaded from the server, the user gets the benefit of “end-to-end encryption” without any of the hassles.
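The complementary client-side half is just as small; a sketch with the same python-gnupg wrapper (how the passphrase is obtained is left to the mail client):

    import gnupg

    gpg = gnupg.GPG()  # uses ~/.gnupg by default

    def store_plaintext(raw_message: str, passphrase: str) -> str:
        """Decrypt once at download time so the local store is searchable
        plain text (protected by your full-disk encryption)."""
        headers, sep, body = raw_message.partition("\n\n")
        if "-----BEGIN PGP MESSAGE-----" not in body:
            return raw_message
        decrypted = gpg.decrypt(body, passphrase=passphrase)
        return headers + sep + str(decrypted) if decrypted.ok else raw_message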

GPG-encrypted mail would work a lot more like an OTR encrypted chat.  You don’t get a message from OTR that reads “This chat message is encrypted, do you want to decrypt it?  Enter your password” every time you get a new chat, nor does the thread get re-encrypted as soon as you type something, requiring you to reenter your key to review any previous chat message.  That’d be idiotic.  But that’s what email does now.

Adoption Matters

These two simple changes would mean that server-side mail stores are secure, but just as easy to use and as accessible to clients as they are now.  Your local device security, as it is now, would be up to you.  You should encrypt your hard disk and use strong passwords because sooner or later your personal device will be lost or stolen and you don’t want all that stuff published all over the internet, whether it comes from your mail folder or your DCIM folder.

It doesn’t solve a targeted attack against your local device, but you’ll always be vulnerable to that and pretending that storing your encrypted email on your encrypted device in an encrypted form adds security is false security that has the unfortunate side effect of reducing usability and thus retarding adoption of real security.

If we did this, all of our email would be encrypted, which means there’s no additional hassle to getting mail that was encrypted with your GPG key by the sender (rather than on the server).  The way it works now, GPG is annoying enough to warrant asking people not to send encrypted mail unless they have to, which tags that mail as worth attacking to anyone who cares.  By eliminating the disincentive, universally end-to-end encrypted email becomes possible.

A few other minor enhancements that would help to really make end-to-end, universally encrypted email the norm include:

  • Update mail clients to prompt for key generation along with any new account (a key-generation sketch follows this list).  The only required option would be a password, which should be different from the server login password: a hash of the login password has to be on the server, and a crack of that hash would then permit decryption of the mail stored there, so UX programmers take note!
  • Update address books, vcard, and LDAP servers so they expect a public key for each correspondent and complain if one isn’t provided or can’t be found.  An email address without a corresponding key should be flagged as problematic.
  • Corporate and hierarchical organizations should use a certificate authority-based key certification system, everyone else should use web-of-trust/perspectives style key verification, which can be easily automated to significantly reduce the risk of MitM attacks.
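The key-generation prompt in the first item is genuinely trivial to implement; a sketch with python-gnupg (the identity and passphrase are placeholders a real client would collect in the new-account dialog):

    import gnupg

    gpg = gnupg.GPG()

    params = gpg.gen_key_input(
        key_type="RSA",
        key_length=4096,
        name_real="Alice Example",                   # placeholder
        name_email="alice@example.com",              # placeholder
        passphrase="not-the-server-login-password",  # see the note above
    )
    key = gpg.gen_key(params)  # can take a while: key generation needs entropy
    print("new key fingerprint:", key.fingerprint)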

This is easy. It should have been done a long time ago.

 

Footnotes

1. I remember this anecdote from an early 1990’s version of PGP.  I may be mis-remembering it, as the closest reference I can find is this FAQ.
Posted at 16:21:29 GMT-0700

Category: FreeBSD, Privacy, Security, Technology

SSL for Authentication Sucks

Wednesday, November 26, 2014 

One of the most horrible mistakes made in the early days of the internet was to use SSL (an “HTTPS” connection) for both securing a connection with encryption and verifying that the server you reach matches the URL you entered.

Encryption is necessary so you can’t be spied on by anyone running wireshark on the same hotspot you’re on, something that happens all the time, every day, to everyone connecting to public wifi, which means just about everyone just about any time they take a wifi device out of the house.  It is pretty certain that you – you yourself – have thwarted cybercrime attempts thanks to SSL, not just once but perhaps dozens of times a day, depending on how often you go to Starbucks.

The second purpose, attempting to guarantee that the website you reached is served by the owner of the domain name as verified by some random company you’ve never heard of is an attempt to thwart so-called “Man in the Middle” (MITM) and DNS poisoning attacks.  While these are also fairly easy (especially the latter), they’re both relatively uncommon and the “fix” doesn’t work anyway.

In practice, the “fix” can be detrimental because it gives a false sense of security to that sliver of the population that knows enough to be aware that the browser bar ever shows a green lock or any other indicator of browser trust, but not enough to realize that the indicator is a lie. It is beyond idiotic that our browsers make a big show of this charade of identity verification, with great colorful warnings of non-compliance whenever detected, in order to force everyone to pay off the cert mafia and join in the protection racket of pretending that their sites are verified.

I’ve written before why this is counterproductive, but the basic problem is that browsers ship with a set of “root” certificates[1] that they trust for no good reason at all except that there’s a massive payola racket: if you’re a certificate issuer with a distributed, accepted CA certificate you can print money by charging people absurd fees for executing a script on your server which, at zero cost to the operator, “signs” their certificate request (oh please, please great cert authority sign my request) so that browsers will accept it without warning.  It isn’t like they actually have the owner of the site come in to their office, show ID, and verify they are who they say they are.  Nobody does that except CACert, which is a free service and, surprise, their root cert is not included in any shipping browser.

Users then will typically “trust” that the site they’re connecting to is actually the one they expected when they typed in a URL.  Except they didn’t type a URL: they clicked on a link, and they really have no idea where their browser is going and will not read the URL in the browser bar anyway, and bankomurica.com is just as valid as bankofamerica.com, so the typical user has no clue where the browser thinks it is going, and a perfectly legit, valid cert can be presented for a confusing (or not really so much) URL.  Typosquatters and pranksters have exploited this very successfully and have proven beyond any doubt that pretending that a URL is an unambiguous identifier is foolish, and so too, therefore, is “proving” that the connection between the browser and the URL hasn’t been hijacked.

Further, law enforcement in most countries requires that service providers ensure that it is possible to surreptitiously intercept communications on the web: that is, to do the exact thing we’re sold that a “valid” certificate makes “impossible.” In practice they get what are called “lawful intercept” certificates, which are a bit like a fireman’s key that doesn’t compromise your security because only a fireman would ever, ever have one.  Countries change hands and so do these.  If you think you’re a state-level target and certificate signing has any value, you’re actually putting your life at risk.  This is an immense disservice because there will be some people at risk, under surveillance, who will actually pay attention to the green bar and think it means they are safe.  It does not.  They may die.  Really.

Commercial certs can cost thousands of dollars a year and provide absolutely zero value to the site visitor except making the browser warnings go away so they can visit the site without dismissing meaningless and annoying warnings.  There is absolutely no additional value to the site operator for a commercial cert over a completely free self-signed cert except making the browser warnings go away for their visitors.  The only entities that benefit from this are the certificate vendor, from the fees they charge site operators, and the browser vendor, from whatever fees are associated with including certificates in the browser installer.  You, the internet user, just lose out because small sites don’t use encryption, being unable to afford the certs or the hassle, and so your security is compromised to make other people rich.

There are far better tools[2] that use a “Web of Trust” model, pioneered by PGP back in the early 1990s, that actually does have some meaning.  CACert uses it, so CACert certificates actually mean something when they indicate that the site you’re visiting is the one named in the URL; but since CACert doesn’t charge, and therefore can’t afford to buy into the cert mafia, their root cert is not included in browsers and you have to install it yourself.

The result is that a small website operator has four options:

  • Give up on security and expose all the content that moves between their server and their visitors to anyone snooping or logging,
  • Use a self-signed cert[3] to encrypt traffic, which will generate all sorts of browser warnings for their visitors in an attempt to extort money from them (see the sketch below),
  • Use one of the free SSL certificate services that become increasingly annoying to keep up to date and provide absolutely zero authentication value but will encrypt traffic without generating warnings,
  • Use CACert and ask users to be smart enough to install the CACert root certificate and thus actually encrypt and reasonably securely prove ownership.

And, of course, agitate for rationality: Perspectives and the CACert root should ship with every browser install.
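For scale, here is what the self-signed option costs the operator in effort: one command.  A sketch that shells out to openssl (the hostname is a placeholder):

    import subprocess

    # Ten-year self-signed cert for a private-network appliance
    subprocess.run([
        "openssl", "req", "-x509", "-newkey", "rsa:4096",
        "-keyout", "key.pem", "-out", "cert.pem",
        "-days", "3650", "-nodes",        # -nodes: don't passphrase the key
        "-subj", "/CN=router.local",      # placeholder hostname
    ], check=True)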

Footnotes

1. You can review a list of the certificates of trusted Certificate Authorities here. Note that the list includes state-agency certificates from countries with controversial human rights records.

2. The hierarchical security model that browsers currently use, referencing a certificate authority, does work well for top-down organizations like companies or the military (oddly, the US Military’s root certificates aren’t included in browsers).  In such a situation it makes sense for a central authority to dictate what sources are trusted; it just does not make sense in an unstructured public environment where the “authority” is unknown and their vouch means nothing.

3. If you’re running your own web services, for example a web interface to your wifi router or a server or some other device with a web interface, it will probably use a self-signed cert, and you’ve probably gotten used to clicking through the warnings, which at least diminishes the blackmail value of the browser warnings as people get used to ignoring them.  Installing certificates in Firefox is pretty easy; it is a major hassle in Chrome or IE (because Chrome, awesome work Google, great job, uses IE’s certificate store, at least on Windows). Self-signed certs are used everywhere in IT management; almost all web-interfaced equipment uses them.  IBM has a fairly concise description of how to install the certs.  Firefox wins.
Posted at 15:50:20 GMT-0700

Category: Security, Technology

What’s Right About PGP

Thursday, August 14, 2014 

Occasionally you find the crankypants commentary about the “problems” with PGP. These commentaries are invariably written by people who fail to recognize the use modality that PGP is meant to address.

PGP is a cryptographic tool that is, genuinely, annoying to use in most current implementations (though I find the APG extension to the K9 mail app on the android as easy or easier to use than the former Enigmail implementation for Thunderbird, since replaced by a less fully featured native implementation.)  The purpose of PGP is to encrypt the contents of mail messages sent between correspondents.  Characteristics of these messages are that they have more than ephemeral value (you might need to reference them again in the future) and that the correspondents are not attempting to hide the fact that they correspond.

It is intrinsic to the capabilities of the tool that it does not serve to hide with whom you are communicating (there are tools for doing this, but they involve additional complexity), and all messages encrypted with a single key can be decrypted with that key. Such keys are typically protected by a password the user must remember, so it is a sufficiently accurate simplification to consider the messages themselves protected by a password that the owner of the messages must remember and might possibly be forced to divulge; that is the fundamental limit on the security of the messages so protected. (There are different tools for different purposes that exchange ephemeral keys the user never knows, which aren’t protected by a mnemonic password and therefore can never be forcibly divulged.)

These rants against PGP annoy me because PGP is an excellent tool that is marred by minor usability problems. Energy expended on ignorantly dismissing the tool is energy that could be better spent improving it.  By far the most important use cases for the vast majority of users that have any real reason to consider cryptography are only addressed by PGP.  I make such a claim based on the following:

  • Most business and important correspondence is conducted by email and despite the hyperventilation of some ignorant children, will remain so for the foreseeable future.
  • Important correspondence, more or less by definition, has a useful shelf life of more than one read and generally serves as a durable (and legally admissible) record.
  • There are people who have legitimate reasons to obfuscate their correspondents: email, even PGP encrypted email, is not a suitable tool for this task.
  • There are people who have legitimate reason to communicate messages that must not be permanently recorded and for which either the value of the communication is ephemeral or the risk is so great that destroying the archive is a reasonable trade-off: email, even with PGP, is not a suitable tool for this task.
  • There’s some noise in the rant about not being sufficient to protect against NSA targeted intercept or thwarting NSA data archiving, which makes an implicit claim that the author has some solution that might provide such protection to end users. I consider such claims tantamount to homicide.  If someone is targeted by state-level surveillance, they can’t use a Turing-complete device (any computer device) to communicate information that puts them at risk; any suggestion to the contrary is dangerous misinformation.

Current implementations of PGP have flaws:

  • For some reason, mail clients still don’t prompt for the import or generation of PGP keys whenever a new account is set up.  That’s somewhat pathetic.
  • For some reason, address books integrated into mail clients don’t have a field for the public key of the correspondent.  This is a bizarre omission that necessitates add-on key management plug-ins which just make the process more complicated.
  • It is somewhat complicated by IMAP, but no client stores encrypted messages locally in unencrypted form by default (Thunderbird can now be configured to do so), which makes them difficult to search and reduces their value as an archival record.  Keeping the local store encrypted has trivial security value: your storage device is, of course, encrypted, and if it isn’t, exposed email is likely to be the least of your problems should your device be lost.

PGP is, despite these shortcomings, one of the most important cryptographic tools available.

Awesome properties of PGP keys no other cryptographic system can touch

PGP keys are (like all cryptographic keys in use by any system) long strings of seemingly random data; the more seemingly random, the better.  They are, by that very nature, nonmnemonic.  Public key cryptosystems like PGP have an awesome, incredibly useful characteristic: you can publish your public key (a long, random string of numbers) and someone you’ve never met can encrypt a message using that public key that only your private key can decrypt.  A random stranger can initiate cryptographically secure communication spontaneously.

Conversely, you can “sign” data with your private key, and anyone can verify that you signed it by checking the signature (more precisely, an encrypted short mathematical summary of your message) against your public key.  This is so secure it is a federally accepted signature mechanism.
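Both directions fit in a few lines of python-gnupg; the addresses and passphrase are placeholders, and the sketch assumes the relevant keys are already in the local keyring:

    import gnupg

    gpg = gnupg.GPG()

    # A stranger encrypts to Bob's published public key; only Bob can read it
    ciphertext = gpg.encrypt("meet at noon", "bob@example.com")

    # Alice signs with her private key; anyone can verify with her public key
    signed = gpg.sign("I wrote this", keyid="alice@example.com",
                      passphrase="correct horse battery staple")
    print(gpg.verify(str(signed)).valid)  # True if the signature checks out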

There’s a hypothesized attack called a Man In The Middle attack (often abbreviated “MITM”) that exploits the fact these keys aren’t really human readable (you can, but they’re so long you won’t) whereby an attacker (traditionally the much maligned Eve) intercepts messages between two parties (traditionally the secretive Alice and Bob), pretending to be Bob whilst communicating with Alice and pretending to be Alice whilst communicating with Bob.  By substituting her keys for Alice’s and Bob’s, both Bob and Alice inadvertently send messages that Eve can decrypt and she “simply” forwards Bob’s to Alice using Alice’s public key and vice versa so they decrypt as expected, despite coming from the evil Eve.

Eve must, however, be able to intercept all of Alice and Bob’s communication or her attack may be discovered when the keys change, which is not practical in the real world on an ongoing basis (but, ironically, is easier with ephemeral keys). Pretending to be someone famous is easier and could be more valuable, as people you don’t know might send you unsolicited private correspondence intended for the famous person; the cure is widely disseminating key “fingerprints” so that false keys are easy to discover.  And if you expect people to blindly send you high-priority information encrypted with your public key, you have an obligation to mitigate the risk of a false recipient.

Occasionally it is hypothesized that this attack compromises the utility of PGP, but nonmnemonic keys are a shortcoming of every cryptographic system that is even marginally secure.  It is intrinsic to a public key infrastructure that the public keys must be exchanged.  It is therefore axiomatic that a PKI-based cryptographic system will be predicated on mechanisms to exchange nonmnemonic key information. And hidden key exchange, as implemented by OTR or other ephemeral key systems, makes MITM attacks harder to detect.

While it is true that elliptic curve PKI algorithms achieve equivalent security with shorter keys, they are still far too long to be mnemonic.  One might nominally equate a 4k RSA key with a 0.5k elliptic curve key, a non-trivial factor of 8 reduction with some significance to algorithmic efficiency, but no practical difference in human readability.  Migrating to elliptic curves is on the roadmap for PGP (with GPG 2.1, now in beta) and should be expedited.

PGP Key management is a little annoying

Actually, it isn’t so much PGP that makes this true, but rather the fact that mail clients haven’t integrated PGP.  That Gmail and Yahoo mail will soon be integrating PGP into their mail clients is a huge step in the right direction, even if integrating encryption into a webmail client is kind of pointless: a user gifting Google or Yahoo their correspondence is clearly unconcerned with privacy.  Why people who should know better still use Gmail is a mystery to me.  When people who care about data security use a gmail address, it is like passing the temperance preacher passed out drunk in the gutter.  With every single message sent.  Even so, this is a step in the right direction by some good people at Google.

It is tragic that Mozilla has back-burnered Thunderbird, but on the plus side its interface isn’t being screwed up by the sort of pointless changes, made to justify otherwise irrelevant UX designers, that idiotically land in Firefox with each release.  Hopefully the remaining community will rally around full integration of PGP, following the astonishingly ironic lead of the privacy-exploiting industry.

If keys were integrated into address books in every client and every corporate LDAP server, it would go a long way toward solving the valid annoyances with PGP key management; however, in my experience key management is never the sticking point.  It is either key generation or the hassle of dealing with data rendered opaque and nearly useless because it remains encrypted even after it has reached me.

Forward Secrecy has a place.  It isn’t email.

A complaint levied against PGP that proves beyond any doubt that the complainant doesn’t understand the use case of PGP is that it doesn’t incorporate forward secrecy.  Forward secrecy comes from a cryptosystem that negotiates a new key for each message thread, a key that is never shared with the users and never stored by the system.  By doing this, the correspondents cannot be forced to reveal the keys to decrypt the contents of stored or captured messages, since they don’t know them.  Which also means they can’t access the contents of their own stored messages, because those are encrypted with keys they don’t know.  You can’t read your own messages.  There are messaging modalities where such a “feature” isn’t crippling, but email isn’t one of them; sexting perhaps, but not email.
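
To make the mechanics concrete, here is a minimal sketch of an ephemeral key agreement (the building block behind forward secrecy), assuming the pyca/cryptography package; the thread label is hypothetical:

    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives import hashes

    # Each side generates a throwaway key pair that lives for one thread.
    alice_eph = X25519PrivateKey.generate()
    bob_eph = X25519PrivateKey.generate()

    # Combining one's own private half with the peer's public half yields
    # the same shared secret on both ends, never transmitted on the wire.
    shared = alice_eph.exchange(bob_eph.public_key())
    session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                       salt=None, info=b'thread-42').derive(shared)

    # When the ephemeral private keys are discarded, nothing -- including
    # the participants themselves -- can rederive session_key to decrypt
    # a stored transcript.  That is the whole point, and the whole problem.
    del alice_eph, bob_eph, shared, session_key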

Indeed, the biggest, most annoying, most discouraging problem with PGP is that clients do not insert the unencrypted message into the local message store after decrypting it.  This forces the user to decrypt the message again each time they need to reference it, if they can ever find it again: you can’t search encrypted messages without decrypting them.  No open source client I’m aware of has faced this debilitating failure of use awareness, though Symantec’s PGP Desktop does (so it is solvable).  Being naive about message use wouldn’t have been surprising for the first few months of GPG’s general use, but that this failure persists after decades is somewhat shocking and frustrating.  It is my belief that the geekiness of most PGP interfaces has so limited use that most people (myself included) aren’t crippled by not being able to find PGP-encrypted mail because we get so little of it.  If even a small percentage of our mail were encrypted, never being able to find it again would be a disaster and we’d stop using encryption.
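
What clients should do is a one-time decrypt on receipt into the (already disk-encrypted) local store.  A minimal sketch with python-gnupg, where message_store is a hypothetical stand-in for whatever the client actually uses:

    import gnupg

    gpg = gnupg.GPG()

    def store_decrypted(armored_message, passphrase, message_store):
        """Decrypt an incoming PGP message once and file the plaintext
        locally so it remains searchable as an archival record."""
        result = gpg.decrypt(armored_message, passphrase=passphrase)
        if result.ok:
            message_store.append(str(result))
        else:
            # Never lose mail: keep the ciphertext if decryption fails.
            message_store.append(armored_message)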

This is really annoying: encrypted messages end up with the frequently intolerable drawback of being effectively ephemeral, without providing the cryptographic value of forward secrecy.

Email is normally used as a messaging modality of record.  It is the way in which we exchange contemplative comments and data that exceeds a sentence or so.  This capability remains important to almost all collaborative efforts and reducing messaging complexity to chat bubbles cripples cognitive complexity.  The record thus created has archival value and is a fundamental requirement in many environments.  Maximizing the availability, searchability, and ease of recall of this archive is essential.  Indeed, even short form communication (“chat” in various forms), which is typically amenable to forward secrecy because of the generally low content value thus communicated, should have the option of PGP encryption instead of just OTR in order to create a secure but archival communications channel.

A modest proposal

I’ve been using PGP since the mid 1990s.  I have a correspondent’s key from 1997 and my own is from 1998.  Yet while I have about 2,967 contacts in my address book, I have only 139 keys in my GPG keyring.  An adoption rate of 4.7% for encrypted email isn’t exactly a wild success.  I don’t think the problems are challenging, and while I very much appreciate the emergence of cryptographically secure communications modalities such as OTR for chat and ZRTP for voice, I’ve been waiting for decades for easy-to-use secure email.  And yet, when people ask me to help them set up encrypted email, I generally tell them it is complicated; I’m willing to help them out, but they probably won’t end up using it.  Over the years, a few relatively easy-to-fix issues have retarded even my own use:

  • The fact that users have to find and install a somewhat complex plugin to handle encryption is daunting to the vast majority of users.  Enigmail is complicated enough that it is unusable without in-person walk-through support for most users; even phone support doesn’t get most people through setup. Basic GPG key generation and management should be built into the mail client.  Every time you set up a new account, you should have to opt out of setting up a public key, and there’s no reason for any options by default other than entering a password to protect the private key.
  • Key fields should be built into the address book of every mail client by default.  Any mail client that doesn’t support a public key field should be shamed and ridiculed.  That’s all of them until Gmail releases end-to-end as a default feature, though that may never happen as that breaks Google’s advertising model.  Remember, Google pays all their developers and buys them all lunch solely by selling your private data to advertisers.  That is their entire business model. They do not consider this “evil,” but you might.
  • I have no idea why my received encrypted mail is stored encrypted on my encrypted hard disk alongside hundreds of thousands of unencrypted messages and tens of thousands of unencrypted documents.  Like any sensible person who takes a digital device out of the house (or leaves it unprotected in the house), I encrypt my local storage to protect those messages and documents from theft and exploitation.  My encrypted email messages are merely data cruft I can’t make much use of, since I can’t search them.  That’s idiotic and cripples the most important use modality of email: the persistent record. Any mail client should permanently decrypt the local message store unless the user specifically requests that a message be stored encrypted, an option that should be the same whether a message arrived encrypted or unencrypted, as the client could encrypt mail with the user’s public key on arrival without requiring a password or access to the private key.
  • Once we solve the client storage failure and make encrypted email useful for something other than sending attachments (which you can save, ZOMG, in unencrypted form) and feeling clever for having gotten the magic decoder ring to work, then it would make sense to modify mail servers to encrypt all unencrypted incoming mail with the user’s public key (a sketch of such encrypt-on-arrival follows this list), which mitigates a huge risk in having a mail server accessible on the internet: that the historical store of data there contained is remotely compromised.  This protects data at rest (data which is often, but not assuredly, already protected in transit by encrypted transport protocols using ephemeral keys with forward secrecy). End-to-end encryption using shared public keys is still optimal, but leaving the mail store unencrypted at rest is an easily solved at-rest security failure, while protection in transit is largely solved (and would be quickly if Gmail bounced any SMTP connection not protected by TLS 1.2+).
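
Because encryption needs only the public key, encrypt-on-arrival requires no passphrase and no private key on the server.  A minimal sketch with python-gnupg (the function name is mine):

    import gnupg

    gpg = gnupg.GPG()

    def encrypt_on_arrival(plaintext_message, recipient_fingerprint):
        """Encrypt an incoming unencrypted message to the user's own
        public key before it is written to the mail store, protecting
        data at rest without the server ever holding a private key."""
        encrypted = gpg.encrypt(plaintext_message, recipient_fingerprint,
                                always_trust=True)  # the operator's own key
        return str(encrypted)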

Fixing the obvious usability flaws in encrypted email is fairly easy.  Public key cryptography in the form of PGP/GPG is an incredibly powerful and tremendously useful tool that has been hindered in uptake by failures of perception and by overly stringent use cases that impose onerous limitations.  Adjusting the use model to match requirements would make PGP far more useful and far easier to convince people to use.

Phil Zimmermann’s essay “Why I Wrote PGP” applies today as much as it did in 1991:

What if everyone believed that law-abiding citizens should use postcards for their mail? If a nonconformist tried to assert his privacy by using an envelope for his mail, it would draw suspicion. Perhaps the authorities would open his mail to see what he’s hiding.

It has been more than two decades, and never has the need for universally encrypted mail been more obvious.  It is time to integrate PGP into all mail clients.


Posted at 14:39:09 GMT-0700

Category: FreeBSD, Linux, Technology

Xabber now uses Orbot: OTR+Tor

Sunday, November 3, 2013 

As of Sept 30, 2013, Xabber added Orbot support. This is a huge win for chat security. (Gibberbot has done this for a long time, but it isn’t as user-friendly or pretty as Xabber, and it is hard to convince people to use it.)

The combination of Xabber and Orbot solves the three most critical problems in chat privacy: obscuring what you say via message encryption, obscuring who you’re talking to via transport encryption, and obscuring which servers to subpoena for what little remains via onion routing. OTR solves the first and Tor fixes the last two (SSL solves the middle one too, though while Tor has a fairly secure SSL ciphersuite, who knows what that random SSL-enabled chat server uses – “none”?).

There’s a fly in the ointment of all this crypto: we’ve recently learned a few entirely predictable (and predicted) things about how communications are monitored:

1) All communications are captured and stored indefinitely. Nothing is ephemeral: not a phone conversation, not an email, not the web sites you visit. It is all stored and indexed, and should somebody sometime in the future decide that your actions are immoral or illegal or insidious or insufficiently respectful, this record may be used to prove your guilt or otherwise tag you for punishment; who knows what clever future algorithms will be used in concert with big data and cloud services to identify and segregate the optimal scapegoat population for whatever political crisis is thus most expediently deflected. Therefore, when you encrypt a conversation it has to be safe not just against current cryptanalytic attacks, but against those that might emerge before the sins of the present are sufficiently in the past to exceed the limitations of whatever entity is enforcing whatever rules. A lifetime is probably a safe bet. YMMV.

2) Those that specialize in snooping at the national scale have tools that aren’t available to the academic community, and there are cryptanalytic attacks of unknown efficacy against some or all of the current cryptographic protocols. I heard someone who should know better pooh-pooh the idea that the NSA might have better cryptographers than the commercial world because the commercial world pays better, as if the obsessive brilliance that defines a world-class cryptographer is motivated by remuneration. Not.

But you can still do better than nothing while understanding that a vulnerability to the NSA isn’t likely to be an issue for many, though if PRISM access is already being disseminated downstream to the DEA, it is only a matter of time before politically affiliated hate groups are trolling emails looking for evidence of moral turpitude with which to tar the unfaithful. Any complacency that might be engendered by not being a terrorist may be short lived. Enjoy it while it lasts.

And thus (assuming you have an Android device) you can download Xabber and Orbot. Xabber supports real OTR, not the fake we-stole-your-acronym-for-our-marketing-good-luck-suing-us “OTR” (they did, but that link is gone now) that Google hugger-muggers and caromshotts you into believing makes your chats ephemeral (of course they and all their intelligence and commercial data-mining partners store your chats; they just make it harder for your SO to read your flirty transgressions). Real OTR is a fairly strong, cryptographically secured protocol that transparently and securely negotiates a cryptographic key to secure each chat, a key you never know and which is lost forever when the chat is over. There’s no open-community way to recover your chat (that is, the NSA might be able to, but we can’t). Sure, your chat partner can screenshot or copy-pasta the chat, but if you trust the person you’re chatting with and you aren’t a target of the NSA or DEA, your chat is probably secure.

But there’s still a flaw. You’re probably using Google. So anyone can just go to Google and ask them who you were chatting with, for how long, and about how many words you exchanged. The content is lost, but there’s a lot of meta-data there to play with.

So don’t use gchat if you care about that. It isn’t that hard to set up a chat server.

But maybe you’re a little concerned that your ISP not know who you’re chatting with. Given that your ISP (at the local or national level) might have a Blue Coat device and could easily be man-in-the-middling every user on their network simultaneously, you might have reason to doubt Google’s SSL connection. While OTR still protects the content of your chat, an inexpensive Blue Coat device renders the meta information visible to whoever along your comms path has bought one. This is where Tor comes in. While Google will still know (you’re still using Google, even after they lied to you about PRISM and said, in court, that nobody using Gmail has any reasonable expectation of privacy?), your ISP (commercial or national) is going to have a very hard time figuring out that you’re even talking to Google, let alone with whom. Even the fact that you’re using chat is obscured.

So give Xabber a try. Check out Orbot, the effortless way to run it over Tor. And look into alternatives to cloud providers for everything you do.

Posted at 08:50:47 GMT-0700

Category: FreeBSD, Self-publishing, Technology