Cryptography

Projecting Qubit Realizations to the Cryptopocalypse Date

Friday, August 4, 2023 

 

RSA 2048 is predicted to fail by 2042-01-15 at 02:01:28.
Plan your bank withdrawals accordingly.

 

Way back in the ancient era of 2001, long before the days of iPhones, back when TV was in black and white and dinosaurs still roamed the earth, I delivered a talk on quantum computing at DEF CON 9.0.  In the conclusion I offered some projections about the growth of quantum computing based on reported growth of qubits to date. Between the first qubit in 1995 and the 8-qubit system announced before my talk in 2001, qubit counts were doubling about every two years.

I drew a comparison with Moore’s law, which holds that computers double in power every 18 months, or 2^(years/1.5). A feature of quantum computers is that the power of a quantum computer increases as 2 to the power of the number of qubits, and the number of qubits was itself doubling at some rate, then about every two years, giving 2^(2^(years/2)). In ASCII: Moore’s law is 2^(Y/1.5) and Gessel’s law is 2^2^(Y/2).
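To make the comparison concrete, here is a minimal sketch (Python, with power normalized to 1 at year zero; the 1.5- and 2-year doubling periods are the figures quoted above) of how differently the two curves grow:

```python
# A minimal sketch comparing the two growth laws described above.
# Power is normalized to 1 at year 0.

def moores_law(years: float) -> float:
    """Classical power: doubles every 1.5 years, 2^(y/1.5)."""
    return 2 ** (years / 1.5)

def gessels_law(years: float, doubling_period: float = 2.0) -> float:
    """Quantum power: 2 raised to a qubit count that itself doubles
    every `doubling_period` years, i.e. 2^(2^(y/d))."""
    return 2 ** (2 ** (years / doubling_period))

for y in (2, 4, 8, 12):
    print(f"year {y:>2}: classical ~2^{y/1.5:.1f}, quantum ~2^{2**(y/2):.0f}")
```

By year 12 the classical curve has grown to 2^8 while the double exponential is at 2^64, which is the whole point of the comparison.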

Quantum Computing and Cryptography 2001 7.0 Conclusion slide

As far as I know, nobody has taken up my formulation of quantum computing power as a time series double exponential function of the number of qubits in a parallel structure to Moore’s law. It seems compelling, despite obviously having a few (minor) flaws. A strong counter argument to my predictions is that useful quantum computers require stable, actionable qubits, not noisy ones that might or might not be in a useful state when measured. Data on stable qubit systems is still too limited to extrapolate meaningfully, though a variety of error correction techniques have been developed in the past two decades to enable working, reliable quantum computers. Those error correction techniques work by combining many “raw” qubits into a single “logical” qubit at around a 10:1 ratio, which certainly changes the regression substantially, though not the formulation of my “law.”

I generated a regression of qubit growth over the full useful history of quantum computing, 1998–2023; a least-squares fit gives an exponential doubling period of 3.376 years, quite a bit slower than the heady early years’ 2.0-year doubling rate. On the other hand, fitting an exponential curve to all announcements in the modern 2016–2023 period yields a doubling period of only 1.074 years. The doubling period is only 0.820 years if we fit to just the most powerful quantum computers released, ignoring various projects’ lower-than-maximum qubit-count announcements; I can see arguments for either, though I selected the former as somewhat less aggressive.
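For anyone who wants to reproduce this kind of fit, the doubling period falls out of a simple linear least-squares fit in log2 space. A sketch, using a few illustrative public qubit announcements rather than my actual dataset:

```python
# Sketch of the doubling-period fit described above: fit
# qubits(t) = q0 * 2^((t - t0)/d) by linear least squares in log2 space.
# The (year, qubits) pairs are illustrative announcements, NOT the
# post's actual regression dataset.
import numpy as np

announcements = [
    (1998, 2), (2000, 5), (2001, 7), (2006, 12), (2017, 50),
    (2019, 53), (2021, 127), (2022, 433),  # assumed example values
]
years = np.array([y for y, _ in announcements], dtype=float)
qubits = np.array([q for _, q in announcements], dtype=float)

# log2(q) is linear in t with slope 1/d, so the doubling period is
# the reciprocal of the fitted slope.
slope, intercept = np.polyfit(years, np.log2(qubits), 1)
print(f"doubling period ~ {1 / slope:.3f} years")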

Relative Power of Classical vs. Quantum Computers

From this data, I offer a formulation of what I really hope someone else somewhere will call, at least once, “Gessel’s Law”: P = 2^(2^(y/1.1)) or, more generally, given that we still don’t have enough data for a meaningful regression, P = 2^(2^(y/d)); quantum computational power will grow as 2 to the power 2 to the power years over a doubling period, which will become more stable as the physics advance.

Gidney & Ekerå (of Google) published How to factor 2048-bit RSA integers in 8 hours using 20 million noisy qubits on 2021-04-13, so far the most efficient known (as in not hidden behind classification, should such classified devices exist) explicit algorithm for cracking RSA. The qubit requirement, 2×10⁷, is certainly daunting, but with a doubling time of 1.074 years, we can expect to have a 20,000,000-qubit computer by 2042. Variations will also crack Diffie-Hellman and even elliptic curves, creating some very serious security problems for the world, not just from the failure of encryption but from the exposure of all so-far encrypted data to unauthorized decryption.

Based on the 2016–2023 all-announcements regression and Gidney & Ekerå, we predict RSA 2048 will fall on 2042-01-15 at 2 a.m., a prediction not caveated by the error-correction requirement for stable qubits, since they count noisy, raw qubits as I do. As a validity check, my regression predicts “Quantum Supremacy” right at Google’s 2022 announcement.
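The arithmetic behind such a prediction is easy to sketch: solve the exponential for the year the qubit count crosses the requirement. The baseline system below is an assumed illustration, not the regression’s actual fitted intercept, so it lands a few years off the 2042 figure:

```python
# Back-of-envelope version of the prediction: given a doubling period d
# and a baseline (q0 qubits at year t0), solve q0 * 2^((t - t0)/d) = 2e7
# for t. The 433-qubit / late-2022 baseline is an assumption for
# illustration; the post's regression uses its full dataset and lands
# on 2042.
import math

def crossing_year(q0: float, t0: float, d: float, target: float = 2e7) -> float:
    return t0 + d * math.log2(target / q0)

print(crossing_year(q0=433, t0=2022.9, d=1.074))  # ~2039.5 with these inputs
```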

 

Qubit Realization by Date and several regression curve fits to the data

IQM Quantum Computer Espoo Finland, by Ragsxl

Posted at 05:34:25 GMT-0700

Category: Privacy, Technology

SSL for Authentication Sucks

Wednesday, November 26, 2014 

One of the most horrible mistakes made in the early days of the internet was to use SSL (an “HTTPS” connection) for both securing a connection with encryption and verifying that the server you reach matches the URL you entered.

Encryption is necessary so you can’t be spied on by anyone running wireshark on the same hotspot you’re on, something that happens all the time, every day, to everyone connecting to public wifi, which means just about everyone just about any time they take a wifi device out of the house.  It is pretty certain that you – you yourself – have thwarted cybercrime attempts thanks to SSL, not just once but perhaps dozens of times a day, depending on how often you go to Starbucks.

The second purpose, attempting to guarantee that the website you reached is served by the owner of the domain name (as verified by some random company you’ve never heard of), is meant to thwart so-called “Man in the Middle” (MITM) and DNS-poisoning attacks.  While these attacks are also fairly easy (especially the latter), they’re both relatively uncommon, and the “fix” doesn’t work anyway.

In practice, the “fix” can be detrimental because it gives a false sense of security to that sliver of the population that knows enough to be aware that the browser bar ever shows a green lock or any other indicator of browser trust, but not aware enough to realize that the indicator is a lie. It is beyond idiotic that our browsers make a big show of this charade of identity verification, with great colorful warnings whenever non-compliance is detected, in order to force everyone to pay off the cert mafia and join in the protection racket of pretending that their sites are verified.

I’ve written before about why this is counterproductive, but the basic problem is that browsers ship with a set of “root” certificates¹ that they trust for no good reason at all except that there’s a massive payola racket: if you’re a certificate issuer with a distributed, accepted CA certificate, you can print money by charging people absurd fees for executing a script on your server which, at zero cost to the operator, “signs” their certificate request (oh please, please great cert authority, sign my request) so that browsers will accept it without warning.  It isn’t like they actually have the owner of the site come in to their office, show ID, and verify they are who they say they are.  Nobody does that except CACert, which is a free service and, surprise, their root cert is not included in any shipping browser.

Users then will typically “trust” that the site they’re connecting to is actually the one they expected when they typed in a URL.  Except they didn’t type a URL: they clicked on a link, they really have no idea where their browser is going, and they will not read the URL in the browser bar anyway.  To the typical user, bankomurica.com is just as valid as bankofamerica.com, so they have no clue where the browser thinks it is going, and a perfectly legit, valid cert can be presented for a confusing (or not really so confusing) URL.  Typosquatters and pranksters have exploited this very successfully and have proven beyond any doubt that pretending a URL is an unambiguous identifier is foolish; so too, therefore, is proving that the connection between the browser and the URL hasn’t been hijacked.

Further, law enforcement in most countries requires that service providers ensure that it is possible to surreptitiously intercept communications on the web: that is, to do the exact thing we’re sold that a “valid” certificate makes “impossible.” In practice they get what are called “lawful intercept” certificates, which are a bit like a fireman’s key that doesn’t compromise your security because only a fireman would ever, ever have one.  Countries change hands, and so do these certificates.  If you think you’re a state-level target and that certificate signing has any value, you’re actually putting your life at risk.  This is an immense disservice because there will be some people at risk, under surveillance, who will actually pay attention to the green bar and think it means they are safe.  It does not.  They may die.  Really.

Commercial certs can cost thousands of dollars a year and provide absolutely zero value to the site visitor except making the browser warnings go away so they can visit the site without dismissing meaningless and annoying alerts.  There is absolutely no additional value to the site operator for a commercial cert over a completely free self-signed cert except to make the browser warnings go away for their visitors.  The only entities that benefit are the certificate vendor, from the fees they charge site operators, and the browser vendor, for whatever fees are associated with including certificates in the browser installer.  You, the internet user, just lose out: small sites forgo encryption because they can’t afford certs or the hassle, and so your security is compromised to make other people rich.

There are far better tools² that use a “Web of Trust” model, pioneered by PGP back in the early 1990s, that actually does have some meaning.  CACert uses it, meaning CACert certificates actually mean something when they indicate that the site you’re visiting is the one indicated by the URL; but since CACert doesn’t charge, and therefore can’t afford to buy into the cert mafia, their root certs are not included in browsers, so you have to install it yourself.

The result is that a small website operator has four options:

  • Give up on security and expose all the content that moves between their server and their visitors to anyone snooping or logging,
  • Use a self-signed cert³ to encrypt traffic, which will generate all sorts of browser warnings for their visitors in an attempt to extort money from them (a self-signed cert costs nothing to generate; see the sketch after this list),
  • Use one of the free SSL certificate services that become increasingly annoying to keep up to date and provide absolutely zero authentication value but will encrypt traffic without generating warnings,
  • Use CACert and ask users to be smart enough to install the CACert root certificate and thus actually encrypt and reasonably securely prove ownership.
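On the second option: a self-signed cert really is free to generate, and the crypto is identical to a commercial cert. A minimal sketch using the Python cryptography package (assumed installed; “example.org” is a placeholder):

```python
# Generating a self-signed certificate: issuer == subject, so nobody
# vouches for you but yourself -- but the encryption is just as strong.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "example.org")])
cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)  # self-signed: issuer is the subject
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
    .sign(key, hashes.SHA256())
)
# `cert` and `key` can now be serialized to PEM and dropped into any
# web server config; only the browser warnings distinguish this from
# a cert you paid for.
```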

And, of course, agitate for rationality: Perspectives and the CACert root should ship with every browser install.

Footnotes

1 You can review a list of the certificates of trusted Certificate Authorities here. Note that the list includes state-agency certificates from countries with controversial human rights records.
2 The hierarchical security model that browsers currently use, referencing a certificate authority, does work well for top-down organizations like companies or the military (oddly, the US Military’s root certificates aren’t included in browsers).  In such a situation, it makes sense for a central authority to dictate what sources are trusted.  It just does not make sense in an unstructured public environment where the “authority” is unknown and their vouch means nothing.
3 If you’re running your own web services, for example a web-interface to your wifi router or a server or some other device with a web interface, it will probably use a self-signed cert and you’ve probably gotten used to clicking through the warnings, which at least diminishes the blackmail value of the browser warnings as people get used to ignoring them.  Installing certificates in Firefox is pretty easy.  It is a major hassle in Chrome or IE (because Chrome, awesome work Google, great job, uses IE’s certificate store, at least on Windows). Self-signed certs are used everywhere in IT management, almost all web-interfaced equipment uses them.   IBM has a fairly concise description of how to install the certs.  Firefox wins.
Posted at 15:50:20 GMT-0700

Category: Security, Technology

What’s Right About PGP

Thursday, August 14, 2014 

Occasionally you find the crankypants commentary about the “problems” with PGP. These commentaries are invariably written by people who fail to recognize the use modality that PGP is meant to address.

PGP is a cryptographic tool that is, genuinely, annoying to use in most current implementations (though I find the APG extension to the K9 mail app on Android as easy as or easier to use than the former Enigmail implementation for Thunderbird, since replaced by a less fully featured native implementation).  The purpose of PGP is to encrypt the contents of mail messages sent between correspondents.  Characteristics of these messages are that they have more than ephemeral value (you might need to reference them again in the future) and that the correspondents are not attempting to hide the fact that they correspond.

It is intrinsic to the capabilities of the tool that it does not serve to hide with whom you are communicating (there are tools for doing this, but they involve additional complexity), and all messages encrypted with a single key can be decrypted with that key. Such keys are typically protected by a password the user must remember, so it is a sufficiently accurate simplification to consider the messages themselves protected by a password that the owner must remember, and might possibly be forced to divulge, as the fundamental limit on the security of the messages so protected. (There are different tools for different purposes that exchange ephemeral keys the user never knows, which aren’t protected by a mnemonic password and therefore can never be forcibly divulged.)

These rants against PGP annoy me because PGP is an excellent tool that is marred by minor usability problems. Energy expended on ignorantly dismissing the tool is energy that could be better spent improving it.  By far the most important use cases for the vast majority of users that have any real reason to consider cryptography are only addressed by PGP.  I make such a claim based on the following:

  • Most business and important correspondence is conducted by email and despite the hyperventilation of some ignorant children, will remain so for the foreseeable future.
  • Important correspondence, more or less by definition, has a useful shelf life of more than one read and generally serves as a durable (and legally admissible) record.
  • There are people who have legitimate reasons to obfuscate their correspondents: email, even PGP encrypted email, is not a suitable tool for this task.
  • There are people who have legitimate reason to communicate messages that must not be permanently recorded and for which either the value of the communication is ephemeral or the risk is so great that destroying the archive is a reasonable trade-off: email, even with PGP, is not a suitable tool for this task.
  • There’s some noise in the rant about not being sufficient to protect against NSA targeted intercept or thwarting NSA data archiving, which makes an implicit claim that the author has some solution that might provide such protection to end users. I consider such claims tantamount to homicide.  If someone is targeted by state-level surveillance, they can’t use a Turing-complete device (any computer device) to communicate information that puts them at risk; any suggestion to the contrary is dangerous misinformation.

Current implementations of PGP have flaws:

  • For some reason, mail clients still don’t prompt for the import or generation of PGP keys whenever a new account is set up.  That’s somewhat pathetic.
  • For some reason, address books integrated into mail clients don’t have a field for the public key of the associate.  This is a bizarre omission that necessitates add-on key management plug-ins that just make the process more complicated.
  • It is somewhat complicated by IMAP, but no client stores encrypted messages locally in unencrypted form by default (Thunderbird can now be configured to do this), which makes them difficult to search and reduces their value as an archival record.  Storing them decrypted has trivial security cost: your storage device is, of course, encrypted, or exposing your email should your device be lost is likely to be the least of your problems.

PGP is, despite these shortcomings, one of the most important cryptographic tools available.

Awesome properties of PGP keys no other cryptographic system can touch

PGP keys are (like all cryptographic keys in use by any system) long strings of seemingly random data.  The more seemingly random, the better.  They are, by that very nature, nonmnemonic.  Public key cryptosystems, like PGP, have an awesome, incredibly useful characteristic: you can publish your public key (a long, random string of numbers) and someone you’ve never met can encrypt a message using that public key that only your private key can decrypt; a random stranger can initiate cryptographically secure communication spontaneously.

Conversely, you can “sign” data with your “private key” and anyone can verify that you signed it by decrypting it with your public key (or more precisely a short mathematical summary of your message).  This is so secure, it is a federally accepted signature mechanism.
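Both properties are easy to sketch with a raw RSA keypair via the Python cryptography package; PGP wraps the same mathematics in key management and message formats, so this is an illustration of the principle, not PGP itself:

```python
# Public-key crypto in both directions: strangers encrypt to your
# public key; you sign with your private key and anyone verifies.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()  # this half is safe to publish

# A stranger encrypts with the public key; only the private key decrypts.
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = public_key.encrypt(b"spontaneous secure hello", oaep)
assert private_key.decrypt(ciphertext, oaep) == b"spontaneous secure hello"

# Conversely: sign with the private key, verify with the public key.
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
signature = private_key.sign(b"signed statement", pss, hashes.SHA256())
public_key.verify(signature, b"signed statement", pss, hashes.SHA256())
# verify() raises InvalidSignature if the message or signature is forged.
```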

There’s a hypothesized attack called a Man In The Middle attack (often abbreviated “MITM”) that exploits the fact these keys aren’t really human readable (you can, but they’re so long you won’t) whereby an attacker (traditionally the much maligned Eve) intercepts messages between two parties (traditionally the secretive Alice and Bob), pretending to be Bob whilst communicating with Alice and pretending to be Alice whilst communicating with Bob.  By substituting her keys for Alice’s and Bob’s, both Bob and Alice inadvertently send messages that Eve can decrypt and she “simply” forwards Bob’s to Alice using Alice’s public key and vice versa so they decrypt as expected, despite coming from the evil Eve.

Eve must, however, be able to intercept all of Alice and Bob’s communication or her attack may be discovered when the keys change, which is not practical in the real world on an ongoing basis (but, ironically, is easier with ephemeral keys). Pretending to be someone famous is easier and could be more valuable, as people you don’t know might send you unsolicited private correspondence intended for the famous person: the cure is widely disseminating key “fingerprints” so that false keys are easy to discover.  And if you expect people to blindly send you high-priority information with your public key, you have an obligation to mitigate the risk of a false recipient.

Occasionally it is hypothesized that this attack compromises the utility of PGP; but it is a shortcoming of all cryptographic systems that keys are not human readable if they are even marginally secure.  It is intrinsic to a public key infrastructure that public keys must be exchanged.  It is therefore axiomatic that a PKI-based cryptographic system will be predicated on mechanisms to exchange nonmnemonic key information. And hidden key exchange, as implemented by OTR or other ephemeral key systems, makes MITM attacks harder to detect.

While it is true that elliptic curve PKI algorithms achieve equivalent security with shorter keys, they are still far too long to be mnemonic.  One might nominally equate a 4k RSA key with a 0.5k elliptic curve key, a non-trivial factor-of-8 reduction with some significance to algorithmic efficiency, but no practical difference in human readability.  Migrating to elliptic curves is on the roadmap for PGP (with GPG 2.1, now in beta) and should be expedited.

PGP Key management is a little annoying

Actually, it isn’t so much PGP that makes this true, but rather the fact that mail clients haven’t integrated PGP into the client.  That Gmail and Yahoo mail will soon be integrating PGP into their mail clients is a huge step in the right direction, even if integrating encryption into a webmail client is kind of pointless since the user is already clearly utterly unconcerned with privacy if they’re gifting Google or Yahoo their correspondence.  Why people who should know better still use Gmail is a mystery to me.  When people who care about data security use a Gmail address, it is like passing the temperance preacher passed out drunk in the gutter.  With every single message sent.  Even so, this is a step in the right direction by some good people at Google.

It is tragic that Mozilla has back-burnered Thunderbird, but on the plus side they don’t screw up the interface with pointless changes that justify otherwise irrelevant UX designers, as happens with every idiotic Firefox release.  Hopefully the remaining community will rally around full integration of PGP, following the astonishingly ironic lead of the privacy-exploiting industry.

If keys were integrated into address books in every client and every corporate LDAP server, it would go a long way toward solving the valid annoyances with PGP key management; however, in my experience key management is never the sticking point: it is either key generation or the hassle of trying to deal with data rendered opaque and nearly useless by residual encryption once it has reached me.

Forward Secrecy has a place.  It isn’t email.

A complaint levied against PGP that proves beyond any doubt that the complainant doesn’t understand the use case of PGP is that it doesn’t incorporate forward secrecy.  Forward secrecy is a consequence of a cryptosystem that negotiates a new key for each message thread which is not shared with the users and which the system doesn’t store.  By doing this, the correspondents cannot be forced to reveal the keys to decrypt the contents of stored or captured messages since they don’t know them.  Which also means they can’t access the contents of their stored messages because they’re encrypted with keys they don’t know.  You can’t read your own messages.  There are messaging modalities where such a “feature” isn’t crippling, but email isn’t one of them; sexting perhaps, but not email.
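Mechanically, forward secrecy looks something like the following sketch of an ephemeral X25519 exchange (Python cryptography package): both sides derive the same session key, nobody types or stores it, and once the key objects are gone the traffic is unrecoverable, by anyone, including you.

```python
# Ephemeral Diffie-Hellman: the essence of forward secrecy. Both
# parties derive an identical session key that is never written down;
# when these objects go out of scope, the key (and anything encrypted
# under it) is gone for good.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

alice = X25519PrivateKey.generate()  # ephemeral: lives only for this session
bob = X25519PrivateKey.generate()

def session_key(own_private, peer_public) -> bytes:
    shared = own_private.exchange(peer_public)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"chat session").derive(shared)

# Both sides arrive at the same key without ever transmitting it.
assert session_key(alice, bob.public_key()) == session_key(bob, alice.public_key())
# Great for chat; fatal for email, where you need to reread the archive.
```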

Indeed, the biggest, most annoying, most discouraging problem with PGP is that clients do not insert the unencrypted message into the local message store after decrypting it.  This forces the user to decrypt the message again each time they need to reference it, if they can ever find it again.  One of the problems with this is you can’t search encrypted messages without decrypting them.  No open source client I’m aware of has addressed this debilitating failure of use awareness, though Symantec’s PGP Desktop has (so it is solvable).  Being naive about message use wouldn’t have been surprising for the first few months of GPG’s general use, but that this failure persists after decades is somewhat shocking and frustrating.  It is my belief that the geekiness of most PGP interfaces has so limited use that most people (myself included) aren’t crippled by not being able to find PGP encrypted mail because we get so little of it.  If even a small percentage of our mail was encrypted, not ever being able to find it again would be a disaster and we’d stop using encryption.

This is really annoying because encrypted messages have the frequently intolerable drawback of being ephemeral without the compensating cryptographic value of forward secrecy.

Email is normally used as a messaging modality of record.  It is the way in which we exchange contemplative comments and data that exceeds a sentence or so.  This capability remains important to almost all collaborative efforts and reducing messaging complexity to chat bubbles cripples cognitive complexity.  The record thus created has archival value and is a fundamental requirement in many environments.  Maximizing the availability, searchability, and ease of recall of this archive is essential.  Indeed, even short form communication (“chat” in various forms), which is typically amenable to forward secrecy because of the generally low content value thus communicated, should have the option of PGP encryption instead of just OTR in order to create a secure but archival communications channel.

A modest proposal

I’ve been using PGP since the mid 1990s.  I have a correspondent’s key from 1997 and my own dates from 1998.  Yet while I have about 2,967 contacts in my address book, I have only 139 keys in my GPG keyring.  An adoption rate of 4.7% for encrypted email isn’t exactly a wild success.  I don’t think the problems are challenging, and while I very much appreciate the emergence of cryptographically secure communications modalities such as OTR for chat and ZRTP for voice, I’ve been waiting for decades for easy-to-use secure email.  And yet, when people ask me to help them set up encrypted email, I generally tell them it is complicated, I’m willing to help them out, but they probably won’t end up using it.  Over the years, a few relatively easy-to-fix issues have retarded even my own use:

  • The fact that users have to find and install a somewhat complex plugin to handle encryption is daunting to the vast majority of users.  Enigmail is complicated enough that it is unusable without in-person walk-through support for most users.  Even phone support doesn’t get most people through setup. Basic GPG key generation and management should be built into the mail client.  Every time you set up a new account, you should have to opt out of generating a keypair, and there’s no reason for any options by default other than entering a password to protect the private key.
  • Key fields should be built into the address book of every mail client by default.  Any mail client that doesn’t support a public key field should be shamed and ridiculed.  That’s all of them until Gmail releases end-to-end as a default feature, though that may never happen as that breaks Google’s advertising model.  Remember, Google pays all their developers and buys them all lunch solely by selling your private data to advertisers.  That is their entire business model. They do not consider this “evil,” but you might.
  • I have no idea why my received encrypted mail is stored encrypted on my encrypted hard disk along with hundreds of thousands of unencrypted messages and tens of thousands of unencrypted documents.  Like any sensible person who takes a digital device out of the house (or leaves it unprotected in the house), I encrypt my local storage to protect those messages and documents from theft and exploitation.  My encrypted email messages are merely data cruft I can’t make much use of since I can’t search for them.  That’s idiotic and cripples the most important use modality of email: the persistent record. Any mail client should permanently decrypt the local message store unless the user specifically requests a message be stored encrypted, an option that should be the same for a message that arrived encrypted or unencrypted as the client could encrypt mail with the user’s public key on arrival without requiring a password or access to the private key.
  • Once we solve the client storage failure and make encrypted email useful for something other than sending attachments (which you can save, ZOMG, in unencrypted form) and feeling clever for having gotten the magic decoder ring to work, then it would make sense to modify mail servers to encrypt all unencrypted incoming mail with the user’s public key, which mitigates a huge risk in having a mail server accessible on the internet: that the historical store of data there contained is remotely compromised.  This protects data at rest (data which is often, but not assuredly, already protected in transit by encrypted transport protocols using ephemeral keys with forward secrecy).  End-to-end encryption using shared public keys is still optimal, but leaving the mail store unencrypted at rest is an easily solved security failure, while protection in transit is largely solved (and would be solved quickly if Gmail bounced any SMTP connection not protected by TLS 1.2+).

Fixing the obvious usability flaws in encrypted email is fairly easy.  Public key cryptography in the form of PGP/GPG is an incredibly powerful and tremendously useful tool that has been hindered in uptake by limitations of perception and by overly stringent use cases that have imposed onerous constraints.  Adjusting the use model to match requirements would make PGP far more useful and far easier to convince people to use.

Phil Zimmermann’s essay “Why I Wrote PGP” applies today as much as it did in 1991:

What if everyone believed that law-abiding citizens should use postcards for their mail? If a nonconformist tried to assert his privacy by using an envelope for his mail, it would draw suspicion. Perhaps the authorities would open his mail to see what he’s hiding.

It has been more than 20 years and never has the need for universally encrypted mail been more obvious.  It is time to integrate PGP into all mail clients.

 

Posted at 14:39:09 GMT-0700

Category: FreeBSD, Linux, Technology

Xabber now uses Orbot: OTR+Tor

Sunday, November 3, 2013 

On Sept 30, 2013, Xabber added Orbot support. This is a huge win for chat security. (Gibberbot has done this for a long time, but it isn’t as user-friendly or pretty as Xabber and it is hard to convince people to use it.)

The combination of Xabber and Orbot solves the three most critical problems in chat privacy: obscuring what you say via message encryption, obscuring who you’re talking to via transport encryption, and obscuring which servers to subpoena for at least the latter information via onion routing. OTR solves the first and Tor fixes the last two (SSL solves the middle one too, though while Tor has a fairly secure SSL ciphersuite, who knows what that random SSL-enabled chat server uses – “none?”)

There’s a fly in the ointment of all this crypto: we’ve recently learned a few entirely predictable (and predicted) things about how communications are monitored:

1) All communications are captured and stored indefinitely. Nothing is ephemeral; not a phone conversation, nor an email, nor the web sites you visit. It is all stored and indexed, and should somebody sometime in the future decide that your actions are immoral or illegal or insidious or insufficiently respectful, this record may be used to prove your guilt or otherwise tag you for punishment; who knows what clever future algorithms will be used in concert with big data and cloud services to identify and segregate the optimal scapegoat population for whatever political crisis is thus most expediently deflected. Therefore, when you encrypt a conversation it has to be safe not just against current cryptanalytic attacks, but against those that might emerge before the sins of the present are sufficiently in the past to exceed the limitations of whatever entity is enforcing whatever rules. A lifetime is probably a safe bet. YMMV.

2) Those that specialize in snooping at the national scale have tools that aren’t available to the academic community, and there are cryptanalytic attacks of unknown efficacy against some or all of the current cryptographic protocols. I heard someone who should know better pooh-pooh the idea that the NSA might have better cryptographers than the commercial world because the commercial world pays better, as if the obsessive brilliance that defines a world-class cryptographer is motivated by remuneration. Not.

But you can still do better than nothing while understanding that a vulnerability to the NSA isn’t likely to be an issue for many, though if PRISM access is already being disseminated downstream to the DEA, it is only a matter of time before politically affiliated hate groups are trolling emails looking for evidence of moral turpitude with which to tar the unfaithful. Any complacency that might be engendered by not being a terrorist may be short lived. Enjoy it while it lasts.

And thus (assuming you have an Android device) you can download Xabber and Orbot. Xabber supports real OTR, not the fake-we-stole-your-acronym-for-our-marketing-good-luck-suing-us “OTR” (they did, but that link is gone now) that Google hugger-muggers and caromshotts you into believing your chats are ephemeral with (of course they and all their intelligence and commercial data mining partners store your chats, they just make it harder for your SO to read your flirty transgressions). Real OTR is a fairly strong, cryptographically secured protocol that transparently and securely negotiates a cryptographic key to secure each chat, which you never know and which is lost forever when the chat is over. There’s no open community way to recover your chat (that is, the NSA might be able to but we can’t). Sure, your chat partner can screen shot or copy-pasta the chat, but if you trust the person you’re chatting with and you aren’t a target of the NSA or DEA, your chat is probably secure.

But there’s still a flaw. You’re probably using Google. So anyone can just go to Google and ask them who you were chatting with, for how long, and about how many words you exchanged. The content is lost, but there’s a lot of meta-data there to play with.

So don’t use gchat if you care about that. It isn’t that hard to set up a chat server.

But maybe you’re a little concerned that your ISP not know who you’re chatting with. Given that your ISP (at the local or national level) might have a Blue Coat device and could easily be man-in-the-middling every user on their network simultaneously, you might have reason to doubt Google’s SSL connection. While OTR still protects the content of your chat, an inexpensive Blue Coat device renders the meta information visible to whoever along your coms path has bought one. This is where Tor comes in. While Google will still know (you’re still using Google, even after they lied to you about PRISM and said, in court, that nobody using Gmail has any reasonable expectation of privacy?), your ISP (commercial or national) is going to have a very hard time figuring out that you’re even talking to Google, let alone with whom. Even the fact that you’re using chat is obscured.

So give Xabber a try. Check out Orbot, the effortless way to run it over Tor. And look into alternatives to cloud providers for everything you do.

Posted at 08:50:47 GMT-0700

Category: FreeBSD, Self-publishing, Technology

Overthrow the Cert Mafia!

Friday, January 4, 2013 

The certificate system is badly broken on a couple of levels, as demonstrated by the recent revelation that TurkTrust accidentally issued two intermediate SSL CAs, enabling the recipients to issue presumptively valid arbitrary certificates. This is just the most recent compromise (this seems to happen a lot) in a disastrously flawed system, including the recent DigiNotar and Comodo attacks. There are 650 root CAs that can issue certs, including some CAs operated by governments with potentially conflicting political interests or poor human rights records, and your browser probably trusts most or all of them completely by default.

It is useful to think about what we use SSL certs for:

  • Establishing an encrypted link between our network client and a remote server to foil eavesdropping and surveillance.
  • Verifying that the remote server is who we believe it to be.

Encryption is by far the most important, so much more important than verification that verification is almost irrelevant, and fundamental flaws with verification in the current CA system make even trying to enforce verification almost pointless. Most users have no idea what any of the cryptic (no pun intended) and increasingly annoying alerts warning of “unvalidated certs” mean, or even what SSL is.

Google recently started rejecting self-signed certs when attempting to establish an SSL encrypted POP connection via Gmail, an idiotically counterproductive move that will only make the internet less secure by forcing individual mail servers to connect unencrypted. And this from the company whose cert management between their round-robin servers is a total nightmare, where there’s no practical way to ever be sure if a connection has been MITMed or not, as certs come randomly from any number of registrars and change constantly.
What I find most annoying is that the extraordinary protective value of SSL encrypted communication is systematically undermined by browsers like Firefox in an intrinsically useless effort to convince users to care about verification. I have never, not once, ever not clicked through SSL warnings. And even though I often access web sites from areas that are suspected of occasionally attempting to infiltrate dissident organizations with MITM attacks, I still have yet to see a legit MITM attack in the wild myself. But I do know for sure that without SSL encryption my passwords would be compromised. Encryption really matters and is really important to keeping communication secure; anything that adds friction to encryption should be rejected. Verification would be nice if it worked, but don’t add friction to encryption.

no secure encryption unless you pay the cert mafia

Self-signed certs and community verified certs (like CAcert.org) should be accepted without any warnings that might slow down a user at all so that all websites, even non-commercial or personal ones, have as little disincentive to adding encryption as possible. HTTPSEverywhere, damnit. Routers should be configured to block non-SSL traffic (and HTML email, but that’s another rant. Get off my lawn.)

Verification is unsolvable with SSL certs for a couple of reasons: some due to the current model, some due to reasonable human behavior, some due to relatively legitimate law-enforcement concerns, but mostly because absolute remote verification is probably an intractable problem.


Even at a well run notary, human error is likely to occur. A simple typo can, because registrar certs are by default trusted globally, compromise anyone in the world. One simple mistake and everybody is at risk. Pinning does not actually reduce this risk as breaks have so far been from generally well regarded notaries, though rapid response to discovered breaches can limit the damage. Tools like Convergence, Perspectives, and CrossBear could mitigate the problem, but only if they have sufficiently few false positives that people pay attention to the warnings and are built in by default.

But even if issuance were somehow fixed with teams of on-the-ground inspectors and biometrics and colonoscopies, it wouldn’t necessarily help. Most people would happily click through to www.bankomerica.com without thinking twice. Indeed, as companies may have purchased almost every spelling variation and pointed them all toward their “most reasonable” domain name, it isn’t unreasonable to do so. If bankomerica.com asked for a cert in Ubeki-beki-beki-stan-stan, would they (or even should they) be denied? No: valid green bar, invalid site. Even if misdirections weren’t SSL encrypted at all, it isn’t practical to typo-test every legit URL against every possible fake, and the vast majority of users would never notice if their usual bank site came up unencrypted one day via a DNS attack to a site not even pretending to fake a cert (in fact, studies suggest that no users would notice). This user limitation fundamentally obviates the value of certs for identifying sites. But even a typo-misdirection is assuming too much of the user: all of my phishing spam uses brand names in anchor text leading to completely random URLs, rarely even reflective of the cover story, and the volume of such spam suggests this is a perfectly viable attack. Verification attacks don’t even need to go to a vaguely similar domain, let alone go to all the trouble of attacking SSL.


One would hope that dissidents or political activists in democracy-challenged environments that may be subject to MITM attacks might actually pay attention to cert errors or use Perspectives, Convergence, or CrossBear. User education should help, but in the end you can’t really solve the stupid user problem with technology. If people will send bank details to Nigeria so that an astronaut abandoned by his nation can expatriate his back pay, there is no way to educate them on the difference between https://www.bankofamerica.com and http://www.bankomerica.com. The only useful path is to SSL encrypt all sites and try to verify them via a distributed trust mechanism as implemented by GPG (explicit chain of trust), Perspectives (wisdom of the masses), or Convergence (consensus of representatives). All of these seem infinitely more reliable than trusting any certificate registry, whether national or commercial, and as a bonus they escape the cert mafia by obviating the need for a central authority and the overhead entailed; but this only works if these tools have more valid positives than false positives, which is currently far from the case.


Further, law enforcement makes plausible arguments for requiring invisible access to communication. Ignoring the problematic but understandable preference for push-button access without review, and presuming that sufficient legal barriers are in place to ensure such capabilities protect the innocent and are only used for good, it is not rational to believe that law enforcement will elect to give up on demanding lawful intercept capabilities wherever possible. Such intercept is currently enabled by law enforcement certificates which permit authorized MITM attacks to capture encrypted data without tipping off the target of the investigation. Of course, if the US has the tool, every other country wants it too. Sooner or later, even with the best vetting, there is a regime change and control of such tools falls into nefarious hands (much like any data you entrust to a cloud service will sooner or later be sold off in an asset auction to whoever can scrape some residual value out of your data under whatever terms suit them, but that too is a different rant). Thus it is not reasonable for activists in democracy-challenged environments to assume that SSL certs are a secure way to ensure their data is not being surveilled. Changing the model from intrinsic, automatic trust of authority to a web-of-trust model would substantially mitigate the risk of lawful intercept certs falling into the wrong hands, though it would also make such certs useless or far harder to implement.

There is no perfect answer to verification because remote authentication is Really Hard. You have to trust someone as a proxy, and the current model is to trust all or most of the random, faceless, profit- or nefarious-motive-driven certificate authorities. Where verification cannot be quickly made and is essential to security, out-of-band verification is the only effective mechanism: transmitting a hash or fingerprint of the target’s cryptographic certificate via voice or postal mail, or perhaps via public key cryptography.
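A sketch of that out-of-band mechanism, computing a certificate fingerprint short enough to read over the phone or compare against one received by post (Python cryptography package; the file path is a placeholder):

```python
# Out-of-band verification: hash the certificate and compare the
# fingerprint over a second channel the MITM doesn't control.
from cryptography import x509
from cryptography.hazmat.primitives import hashes

with open("cert.pem", "rb") as f:  # placeholder path to the server's cert
    cert = x509.load_pem_x509_certificate(f.read())

fingerprint = cert.fingerprint(hashes.SHA256())
print(fingerprint.hex(":"))  # read this aloud to the site operator
```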

Sadly, the effort to prop up SSL as a verification mechanism has been made at the compromise of widespread, low friction encryption. False security is being promoted at the expense of real security.

That’s just stupid.

Posted at 15:18:25 GMT-0700

Category: Privacy, Security, Technology

28C3 Scariest Talk of the Day

Wednesday, December 28, 2011 

We attended “Effective Denial of Service attacks against web application platforms” by Alexander “alech” Klink and Julian | zeri, where they described a really, really easy-to-implement denial-of-service attack that exploits hash-table insertion, which becomes computationally intensive when the table is filled with hash collisions. It is fairly easy to find 2-4 character hash collisions for a given hash function (and there are only a few variations in use), and as hash operations are performed by default on all POST and POST-like requests, which accept (by default) from 2-8 MB of data, one can easily tie up a computer’s CPU effectively indefinitely.

The researchers tested the attack on most web languages in use (and all in common use): only Perl ships safe (since 2003), and Ruby 1.9 has a patch available. Every other language is vulnerable. Today. The attack is just a POST request with a table of delimited hash-collision values; you could copypasta a working exploit, it is that easy. The vast (vaaast) majority of sites on the web run PHP, and 1 Gbps of attack bandwidth could take down 10,000 cores. With ASP.NET, that 1 Gbps can hold down 30,000 cores; with cRuby 1.8 (not patched, about half of Ruby installs), that 1 Gbps can keep a million cores tied up.
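The mechanics are easy to demonstrate: when every key collides, hash-table insertion degrades from amortized constant time to a linear scan of the bucket, O(n²) overall. A sketch simulating the effect with a constant hash (real exploits instead precompute colliding strings for the target language’s hash function):

```python
# Hash-collision DoS in miniature: a key type whose hash is constant
# forces every dict insertion to compare against all prior keys,
# turning n inserts into ~n^2/2 equality checks.
import time

class Colliding:
    def __init__(self, v):
        self.v = v
    def __hash__(self):
        return 42  # every key lands in the same bucket
    def __eq__(self, other):
        return self.v == other.v

for n in (1_000, 2_000, 4_000):
    t = time.perf_counter()
    d = {Colliding(i): None for i in range(n)}
    print(f"n={n}: {time.perf_counter() - t:.3f}s")  # ~quadruples as n doubles
```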

Yow.

Posted at 18:32:59 GMT-0700

Category: Events, Technology, Travel