Overthrow the Cert Mafia!

Friday, January 4, 2013 

The certificate system is badly broken on a couple of levels, the most recent revelation being that Turktrust accidentally issued two intermediate SSL CAs, which enabled the recipients to issue presumptively valid arbitrary certificates. This is just the latest (probably; this seems to happen a lot) compromise in a disastrously flawed system, following the DigiNotar and Comodo attacks. There are some 650 root CAs that can issue certs, including CAs operated by governments with potentially conflicting political interests or poor human rights records, and your browser probably trusts most or all of them completely by default.

It is useful to think about what we use SSL certs for:

  • Establishing an encrypted link between our network client and a remote server to foil eavesdropping and surveillance.
  • Verifying that the remote server is who we believe it to be.

Encryption is by far the most important, so much more important than verification that verification is almost irrelevant, and fundamental flaws with verification in the current CA system make even trying to enforce it almost pointless. Most users have no idea what any of the cryptic (no pun intended) and increasingly annoying alerts warning of “unvalidated certs” mean, or even what SSL is.

Google recently started rejecting self-signed certs when attempting to establish an SSL encrypted POP connection via Gmail, an idiotically counterproductive move that will only make the internet less secure by forcing individual mail servers to connect unencrypted. And this is from the company whose cert management between their round-robin servers is a total nightmare: there’s no practical way to ever be sure whether a connection has been MITMed, as certs come randomly from any number of registrars and change constantly.
[Screenshot: Perspectives flagging inconsistent Google certs]
What I find most annoying is that the extraordinary protective value of SSL encrypted communication is systematically undermined by browsers like Firefox in an intrinsically useless effort to convince users to care about verification. I have never, not once, ever not clicked through SSL warnings. And even though I often access web sites from places where someone is suspected of occasionally attempting to infiltrate dissident organizations with MITM attacks, I have yet to see a legit MITM attack in the wild myself. But I do know for sure that without SSL encryption my passwords would be compromised. Encryption really matters and is really important to keeping communication secure; anything that adds friction to encryption should be rejected. Verification would be nice if it worked.

no secure encryption unless you pay the cert mafia

Self-signed certs and community verified certs (like CAcert.org) should be accepted without any warnings that might slow down a user at all so that all websites, even non-commercial or personal ones, have as little disincentive to adding encryption as possible. HTTPSEverywhere, damnit. Routers should be configured to block non-SSL traffic (and HTML email, but that’s another rant. Get off my lawn.)
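For scale, the cryptographic half costs nothing. A minimal sketch of generating a self-signed cert (hypothetical hostname and filenames; any CN works, since nobody meaningful checks it):

openssl req -x509 -newkey rsa:4096 -sha256 -days 3650 -nodes \
    -keyout example.key -out example.crt -subj "/CN=mail.example.com"

Point a web or mail server at that pair and the connection is encrypted exactly as strongly as with a paid cert; only the browser warning differs.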

Verification is unsolvable with SSL certs for a couple of reasons: some due to the current model, some due to reasonable human behavior, some due to relatively legitimate law-enforcement concerns, but mostly because absolute remote verification is probably an intractable problem.

[Screenshot: Akamai certificate error, har har]

Even at a well-run notary, human error is likely to occur. A simple typo can, because registrar certs are by default trusted globally, compromise anyone in the world. One simple mistake and everybody is at risk. Pinning does not actually reduce this risk, as breaks have so far come from generally well-regarded notaries, though rapid response to discovered breaches can limit the damage. Tools like Convergence, Perspectives, and CrossBear could mitigate the problem, but only if they are built in by default and have sufficiently few false positives that people pay attention to the warnings.

But even if issuance were somehow fixed with teams of on-the-ground inspectors and biometrics and colonoscopies, it wouldn’t necessarily help. Most people would happily click through to www.bankomerica.com without thinking twice. Indeed, as companies may have purchased almost every spelling variation and pointed them all toward their “most reasonable” domain name, it isn’t unreasonable to do so. If bankomerica.com asked for a cert in Ubeki-beki-beki-stan-stan, would they (or even should they) be denied? No: valid green bar, invalid site. Even if the misdirected sites used no SSL at all, it isn’t practical to typo-test every legit URL against every possible fake, and the vast majority of users would never notice if their usual bank site came up unencrypted one day via a DNS attack pointing to a site not even pretending to fake a cert (in fact, studies suggest that no users would notice). This user limitation fundamentally obviates the value of certs for identifying sites. But even typo-misdirection assumes too much of the user: all of my phishing spam uses brand names in anchor text leading to completely random URLs, rarely even reflective of the cover story, and the volume of such spam suggests this is a perfectly viable attack. Verification attacks don’t even need to go to a vaguely similar domain, let alone go to all the trouble of attacking SSL.

[Screenshot: certificate warning on a Google site]

One would hope that dissidents or political activists in democracy-challenged environments that may be subject to MITM attacks might actually pay attention to cert errors or use Perspectives, Convergence, or CrossBear. User education should help, but in the end you can’t really solve the stupid-user problem with technology. If people will send bank details to Nigeria so that a stranded astronaut can expatriate his back pay, there is no way to educate them on the difference between https://www.bankofamerica.com and http://www.bankomerica.com. The only useful path is to SSL encrypt all sites and try to verify them via a distributed trust mechanism as implemented by GPG (explicit chain of trust), Perspectives (wisdom of the masses), or Convergence (consensus of representatives); all of these seem infinitely more reliable than trusting any certificate registry, whether national or commercial, and as a bonus they escape the cert mafia by obviating the need for a central authority and the overhead it entails. But this only works if these tools have more valid positives than false positives, which is currently far from the case.

[Screenshot: CrossBear report for a Google cert]

Further, law enforcement makes plausible arguments for requiring invisible access to communication. Ignoring the problematic but understandable preference for push-button access without review, and presuming that sufficient legal barriers are in place to ensure such capabilities protect the innocent and are only used for good, it is not rational to believe that law enforcement will elect to give up on demanding lawful intercept capabilities wherever possible. Such intercept is currently enabled by law enforcement certificates, which permit authorized MITM attacks to capture encrypted data without tipping off the target of the investigation. Of course, if the US has the tool, every other country wants it too. Sooner or later, even with the best vetting, there is a regime change and control of such tools falls into nefarious hands (much like any data you entrust to a cloud service will sooner or later be sold off in an asset auction to whoever can scrape some residual value out of your data under whatever terms suit them, but that too is a different rant). Thus it is not reasonable for activists in democracy-challenged environments to assume that SSL certs are a secure way to ensure their data is not being surveilled. Changing the model from intrinsic, automatic trust of authority to a web-of-trust model would substantially mitigate the risk of lawful intercept certs falling into the wrong hands, though it would also make such certs useless or far harder to implement.

There is no perfect answer to verification because remote authentication is Really Hard. You have to trust someone as a proxy, and the current model is to trust all or most of the random, faceless, profit-or-nefarious-motive-driven certificate authorities. Where verification cannot be quickly made and is essential to security, the only effective mechanism is out-of-band verification: transmitting a hash or fingerprint of the target’s cryptographic certificate via voice or postal mail, or perhaps via public key cryptography.
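As a concrete sketch of the out-of-band approach: each side pulls the SHA-256 fingerprint of the cert they see and compares it over the phone or on paper (example.com is a stand-in):

openssl s_client -connect example.com:443 -servername example.com < /dev/null 2>/dev/null \
    | openssl x509 -noout -fingerprint -sha256

If the fingerprints match, there is no man in the middle on that connection, and no CA was consulted to establish that.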

Sadly, the effort to prop up SSL as a verification mechanism has been made at the compromise of widespread, low friction encryption. False security is being promoted at the expense of real security.

That’s just stupid.

Posted at 15:18:25 UTC

Category: technology

Superfish proves certs are useless for identification

Saturday, February 21, 2015 

Can we please, please stop with the stupid certificate verification warnings?

[Superfish logo]

Dear security developers, your model is broken. It never worked. Stop warning people about certificate errors. Now. Forever.

Certificate errors serve two purposes:

  1. They make developers uncomfortable with using perfectly secure self-signed certs, and since commercial certs cost money, much of the web that could be encrypted remains unencrypted. That’s harm done to the public. Thanks.
  2. They happen so often, so relentlessly, for such trivial reasons (not even Google can keep their certs up to date) that users learn to ignore them, which makes an actual man-in-the-middle attack almost certain to succeed with most people, despite the warnings.

The Certificate Authority system is predicated on the idea that Certificate Authorities are flawless and trustworthy. They are neither. The Lenovo/Superfish problem shows another obvious flaw: hardware vendors (and, really, any trusted software installer) have to be trustworthy too, or client-side MITM is easy. And CAs simply can’t verify against that.

This whole idiocy creates massive problems for something as basic as LAN administration. Even before wireless became pervasive, LAN comms should always have been encrypted when passwords or any meaningful data were moving. Current security settings create a massive avalanche of useless errors for “untrustworthy certs” on one’s own network (the obvious fix is to automatically trust all certs on private networks, duh).

This is an issue that bothers me a lot. It gets in my way constantly and makes real security and encrypted communications way harder and way more complicated than they need to be, and the only beneficiaries at all are the certificate Mafiosi. This is just stupid. Superfish proves, again, how broken it is. Can we stop pretending now?

Also, this most recent of many certificate flaws comes with a bonus feature: the MITM cert Superfish uses is apparently really pathetically insecure. Aside from using broken crypto, their software had its password embedded in it, making it easy for crackers to develop tools to harvest additional data from the victims of the Superfish/Lenovo attack. It probably hurts more to find out your vendor hacked you, but the penalty is that the hack also destroyed the security of all of your communications. Thanks. This is why we can’t have nice things. It is also why any back door, no matter what the motive, compromises security.


 

Update: Superfish is, apparently, out of business. While that sucks for the people at the company, who were probably very happy with their Lenovo OEM deal and instead got a big sock full of coal, one might naively hope for an upside: that companies considering a model based on stealing people’s data might take notice of the cautionary tale of Superfish.

Unfortunately, that won’t happen, not in the current valley climate. While it is economically advantageous to hire cheap kids who have no life and will work long hours for meagre pay, they come with a downside: they are all ignorant idiots. I don’t mean they’re not smart or capable (though the smart barrel was long ago drained and the vast majority of brogrammers sauntering around SF really are stupid), rather that they are foolish, as in the opposite of “wise.” Wisdom comes from experience, and experience only comes with time, an immutable dimension. This Superfish debacle was only from February 2015, but this year’s batch of idiot brogrammers weren’t around to see it, and as they gather in self-congratulatory clusters in posh, VC-funded collaborative spaces, company barista-brewed latte in one hand and social-media-distraction-feeding portable device in the other, they’ll be high-fiving and fist-bumping the brilliance of their brand-new idea for getting around SSL so they can collect marketing data and better target advertising. Yay.


 

How to fix Superfish:

Install Perspectives. And support them.

Also, this bugs the crap out of me:

Overthrow the Cert Mafia!

SSL for Authentication Sucks

Unbreaking Firefox SSL Behavior

The CA System is Intractably Broken

Posted at 02:45:41 UTC

Category: Security, technology

huh… MITM or switching mafia allegiances

Sunday, December 18, 2011 

Certs are so fail for authentication.

[Screenshot: Certificate Patrol flagging a Facebook certificate change]

Posted at 21:34:27 UTC

Category: -

The CA System is Intractably Broken

Tuesday, July 21, 2015 

I’ve been dealing with the hassle of setting up certs for a new site over the last few days. It means using StartCom’s certs because they’re pretty good (only one security breach) and they have a decently low-hassle free certificate that won’t trigger BS warnings in browsers marketing fake cert mafia placebo security products to unwitting users. (And the CTO answers email within minutes, well past midnight.)

And in the middle of this, news of another breach of the CA system was announced on the heels of Lenovo’s Superfish SSL crack, this time a class break in which a Chinese certificate authority generated the equivalent of a lawful intercept cert and provided it to a private company. Official lawful intercept certificates are a globally used tool to silently crack SSL so governments can monitor SSL encrypted traffic in compliance with national laws like the US’s CALEA.

But this time it went to a private company, which used it to intercept and crack Google traffic, and Google found out. The absurdity is to presume that this is an infrequent event. Such breaches happen regularly (a “breach” being not a lawful intercept tool itself, which are in constant and widespread use globally, but such a tool in the “wrong” hands). There’s no data on the ratio of discovered breaches to undiscovered breaches, of course. While it is possible that they are always found, the seemingly accidental nature of the discoveries suggests far wider misuse than is generally acknowledged.

The cert mafia should be abolished. Certificate authorities work for authoritarian environments in which a single entity is trusted by fiat, as in a dictatorship or a company. The public should trust public opinion, and a tool like Perspectives would end these problems as well as significantly lower the barrier to a fully encrypted web, since those of us trying to protect our traffic would no longer need to choose between forking over cash to the cert mafia for fake security or making our users jump through scary security messages and complex work-arounds.

Posted at 00:53:59 UTC

Category: FreeBSD, Privacy, Security, technology

Making Chrome Less Horrible

Saturday, June 13, 2015 

Google’s Chrome is a useful tool to have around, but the security features have gotten out of hand and make it increasingly useless for real work without actually improving security.

After a brief rant about SSL, there’s a quick solution at the bottom of this post.


 

Chrome’s Idiotic SSL Handling Model

I don’t like Chrome nearly as much as Firefox, but it does do some things better (I have a persistent annoyance with pfSense certificates that cause slow loading of the pfSense management page in FF, for example). Lately I’ve found that the Google+ script seems to kill Firefox, so I use Chrome for logged-in Google activities.

But Chrome’s handling of certificates is abhorrent.  I’ve never seen anything so resolutely destructive to security and utility.  It is the most ill-considered, poorly implemented, counter-productive failure in UI design and security policy I’ve ever encountered.  It is hateful and obscene.  A disaster.  An abomination. The ill-conceived excrement of ignorant twits.  I’d be happy to share my unrestrained feelings privately.

It is a private network, you idiots

I’ve discussed the problem before, but the basic issues are that:

  • The certificate authority is NOT INVALID, Chrome just doesn’t recognize it because it is self-signed.  There is a difference, dimwits.
  • This is a private network (10.x.x.x or 192.168.x.x) and if you pulled your head out for a second and thought about it, white-listing private networks is obvious.  Why on earth would anyone pay the cert mafia for a private cert?  Every web-interfaced appliance in existence automatically generates a self-signed cert, and Chrome flags every one of them as a security risk INCORRECTLY.
  • A “valid” certificate merely means that one of the zillions of cert mafia organizations ripping people off by pretending to offer security has “verified” the “ownership” of a site before taking their money and issuing a certificate that placates browsers,
  • Or a compromised certificate is being used.
  • Or a law enforcement certificate is being used.
  • Or the site has been hacked by criminals or some country’s law enforcement.
  • etc.

A “valid” certificate doesn’t mean absolutely nothing, but it’s close.

So one might think it is harmless security theater, like a TSA checkpoint: it does no real harm and may have some deterrent value. It is a necessary fiction to ensure people feel safe doing commerce on the internet. If a few percent of people are reassured by firm warnings and thus seduced into consummating their shopping carts, improving ad traffic quality, ensuring Google’s ad revenue continues to flow and their servers continue sucking up our data, what’s the harm?

The harm is that it makes it hard to secure a website. SSL does two things: it pretends to verify that the website you connect to is the one you intended to connect to (but it does not do this), and it actually serves to encrypt data between the browser and the server, making eavesdropping very difficult. The latter, useful function does not require verifying who owns the server, which can only be done with a web-of-trust model like Perspectives or with centralized, authoritarian certificate management.

How to fix Chrome:

The damage is done. Millions of websites that could be encrypted are not, because idiots writing browsers have made it very difficult for users to override inane, inaccurate, misleading browser warnings. However, if you’re reading this, you can reduce the headache with a simple step (Thanks!):

Right click on the shortcut you use to launch Chrome and modify the launch command by adding the following: --ignore-certificate-errors
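On Linux the equivalent is just launching from a shell (binary names and paths below are typical defaults, not gospel):

google-chrome --ignore-certificate-errors
# a Windows shortcut "Target" would look something like:
#   "C:\Program Files (x86)\Google\Chrome\Application\chrome.exe" --ignore-certificate-errors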

Unfuck chrome a bit.

Once you’ve done this, chrome will open with a warning:

zomg: ignore certificate errors?  who doesn't anyway?

YAY.  Suffer my ass.

Java?  What happened to Java?

Bonus rant

Java sucks so bad. It is the second worst abomination loosed on the internet, yet lots of systems use it for useful features, or try to. There are endless compatibility problems with JVM versions, and there’s the absolutely idiotic horror of the recent security requirement that completely disables the “medium” security setting no matter how hard you want to override it, which means you can’t ever update past JVM 7. Ever. Because 8 is utterly useless: they broke it completely, thinking they’d protect you from man-in-the-middle attacks on your own LAN.

However, even if you have frozen at the last moderately usable version of Java, you’ll find that since Chrome 42 (yeah, the 42nd major release of Chrome; that numbering scheme is another frustratingly stupid move, but anyway, get off my lawn) Java just doesn’t run in Chrome. WTF?

Turns out Google, happy enough to push their own crappy products like Google+, won’t support Oracle’s crappy product any more.  As of 42 Java is disabled by default.  Apparently, after 45 it won’t ever work again.  I’d be happy to see Java die, but I have a lot of infrastructure that requires Java for KVM connections, camera management, and other equipment that foolishly embraced that horrible standard.  Anyhow, you can fix it until 45 comes along…

To enable Java in Chrome for a little while longer, you can follow these instructions to enable NPAPI (which enables Java).  Type “chrome://flags/#enable-npapi” in the browser bar and click “enable.”

Enable NPAPI

Posted at 13:24:37 UTC

Category: HowTo, Security, technology

Sony-style Attacks and eMail Encryption

Friday, December 19, 2014 

Some of the summaries of the Sony attacks are a little despairing of the viability of internet security, for example Schneier:

This could be any of us. We have no choice but to entrust companies with our intimate conversations: on email, on Facebook, by text and so on. We have no choice but to entrust the retailers that we use with our financial details. And we have little choice but to use butt services such as iButt and Google Docs.

I respectfully disagree with some of the nihilism here: you do not need to put your data in the butt. Butt services are “free,” but only because you’re the product.  If you think you have nothing to hide and privacy is dead and irrelevant, you are both failing to keep up with the news and extremely unimaginative. You think you have no enemies?  Nobody would do you wrong for the lulz?  Nobody who would exploit information leaks for social engineering to rip you off?

Use butt services only when the function the service provides is predicated on a network effect (like Facebook) or simply can’t be replicated with individual scale resources (Google Search).  Individuals can reduce the risk of being a collateral target by setting up their own services like an email server, web server, chat server, file server, drop-box style server, etc. on their own hardware with minimal expertise (and the internet is actually full of really good and expert help if you make an honest attempt to try), or use a local ISP instead of relying on a global giant that is a global target.

Email Can be Both Secure AND Convenient:

But there’s something this Sony attack has made even more plain: eMail security is bad. Not every company uses the least secure email system possible and basically invites hackers to a data smorgasbord like Sony did by using Outlook (I mean seriously, they can’t afford an IT guy whose expertise extends beyond point-n-click? Though frankly the most disappointing deployment of Outlook is by MIT’s IT staff. WTF?).

As lame as that is, email systems in general suffer from an easily remediated flaw: email is stored on the server in plain text, which means that as soon as someone gets access to the email server, which is by necessity of function always globally network accessible, all historical mail is there for the taking.

Companies institute deletion policies where exposed correspondence is minimized by auto-deleting mail after a relatively short period, typically about as short as possible while still, more or less, enabling people to do their jobs.  This forced amnesia is a somewhat pathetic and destructive solution to what is otherwise an excellent historical resource: it is as useful to the employees as to hackers to have access to historical records and forced deletion is no more than self-mutilation to become a less attractive target.

It is trivial to create a much more secure environment with no meaningful loss of utility with just a few simple steps.

Proposal to Encrypt eMail at Rest:

I wrote in detail about this recently.  I realize it is a TLDR article, but as everyone’s wound up about Sony, a summary might serve as a lead-in for the more actively procrastinating. With a few very simple fixes to email clients (which could be implemented with a plug-in) and to email servers (which can be implemented via mail scripting like procmail or amavis), email servers can be genuinely secure against data theft.  These fixes don’t exist yet, but the two critical but trivial changes are:

Step One: Server Fix

  • Your mail server will have your public key on it (which is not a security risk) and use it to encrypt every message before delivering it to your mailbox if it didn’t come in already encrypted.

This means all the mail on the server is encrypted as soon as it arrives, and if someone hacks in, the store of messages is unreadable. Maybe a clever hacker can install a program to exfiltrate incoming messages before they get encrypted, but doing this without being detected is very difficult and time consuming. Grabbing an .ost file off some lame Windows server is trivial. I don’t mean to engage in victim blaming, but seriously, if you don’t want to get hacked, don’t go out wearing Microsoft.
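A minimal sketch of the server-side hook with procmail and GnuPG (assuming the recipient’s public key is already in the server’s keyring; you@example.com is a stand-in address):

# ~/.procmailrc: encrypt the body of anything that didn't arrive encrypted
:0 fbw
* !^Content-Type:.*encrypted
| gpg --batch --trust-model always --armor --encrypt -r you@example.com

The f flag makes this a filter, b feeds only the body through it so headers stay intact for routing and search, and w waits for gpg to exit cleanly before accepting the result.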

Encrypting all mail on arrival is great security, but it also means that your inbox is encrypted, and since current email clients decrypt your mail for viewing but then “forget” the decrypted contents, encrypted messages are slower to view than unencrypted ones and, most crippling of all, you can’t search your encrypted mail. This makes encrypted mail unusable, which is why nobody uses it after decades. This unusability is a tragic and pointless design flaw that originated to mitigate what was then, apparently, a sore spot with one of Phil’s friends, whose wife had read his correspondence with another woman and divorce ensued; protecting the contents of email from client-side snooping has ever since been perceived as critical.[1]

It was a well-intentioned design constraint and has become a core canon of the GPG community, but is wrong-headed on multiple counts:

  1. An intimate partner is unlikely to need the contents of the messages to reach sufficient confidence in distrust: the presence of encrypted messages from a suspected paramour would be more than sufficient cause for a confrontation.
  2. It breaks far more frequent use such as business correspondence where operational efficiency is entirely predicated on content search which doesn’t work when the contents are encrypted.
  3. Most email compromises happen at the server, not at the client.
  4. Everyone seems to trust butt companies to keep their affairs private, much to the never-ending lulz of such companies.
  5. Substantive classes of client compromises, particularly targeted ones, capture keystrokes from the client, meaning if the legitimate user has access to the content of the messages, so too does the hacker, so the inconvenience of locally encrypted mail stores gains almost nothing.
  6. Server attacks are invisible to most users and most users can’t do anything about them.  Users, like Sony’s employees, are passive victims of sysadmin failures. Client security failures are the user’s own damn fault and the user can do something about them like encrypting the local storage of their device which protects their email and all their other sensitive and critical selfies, sexts, purchase records, and business correspondence at the same time.
  7. If you’re personally targeted at the client side, that some of your messages are encrypted provides very little additional security: the attacker will merely force you to reveal the keys.

Step Two: Client Fix

  • Your mail clients will decrypt your mail automatically and create local stores of unencrypted messages on your local devices.

If you’ve used GPG, you probably can’t access any mail you got more than a few days ago; it is dead to you because it is encrypted. I’ve said before that this makes it as useless as an ephemeral-key encrypted chat, but without the security of an ephemeral key in the event somebody is willing to force you to reveal your key and is interested enough to go through your encrypted data looking for something. They’ll get it if they want it that bad, but you won’t be bothered.

But by storing mail decrypted locally and by decrypting mail as it is downloaded from the server, the user gets the benefit of “end-to-end encryption” without any of the hassles.

GPG-encrypted mail would work a lot more like an OTR encrypted chat.  You don’t get a message from OTR that reads “This chat message is encrypted, do you want to decrypt it?  Enter your password” every time you get a new chat, nor does the thread get re-encrypted as soon as you type something, requiring you to reenter your key to review any previous chat message.  That’d be idiotic.  But that’s what email does now.
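A sketch of the client-side half, decrypt-on-fetch for body-encrypted messages like the ones the procmail recipe above produces (paths are stand-ins; gpg-agent supplies the passphrase; local protection comes from full-disk encryption):

for f in ~/Maildir/new/*; do
    ( sed '/^$/q' "$f"                       # headers, kept verbatim
      sed '1,/^$/d' "$f" | gpg --batch -dq   # decrypt the armored body
    ) > ~/Mail/archive/"$(basename "$f")"
done

The result is a local store that is plain text to you and your search index, and ciphertext to anyone who raids the server.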

Adoption Matters

These two simple changes would mean that server-side mail stores are secure, but just as easy to use and as accessible to clients as they are now.  Your local device security, as it is now, would be up to you.  You should encrypt your hard disk and use strong passwords because sooner or later your personal device will be lost or stolen and you don’t want all that stuff published all over the internet, whether it comes from your mail folder or your DCIM folder.

It doesn’t solve a targeted attack against your local device, but you’ll always be vulnerable to that and pretending that storing your encrypted email on your encrypted device in an encrypted form adds security is false security that has the unfortunate side effect of reducing usability and thus retarding adoption of real security.

If we did this, all of our email will be encrypted, which means there’s no additional hassle to getting mail that was encrypted with your GPG key by the sender (rather than on the server).  The way it works now, GPG is annoying enough to warrant asking people not to send encrypted mail unless they have to, which tags that mail as worth encrypting to anyone who cares.  By eliminating the disincentive, universally end-to-end encrypted email would become possible.

A few other minor enhancements that would help to really make end-to-end, universally encrypted email the norm include:

  • Update mail clients to prompt for key generation along with any new account (the only required option would be a password, which should be different from the server-log-in password since a hash of that has to be on the server and a hash crack of the account password would then permit decryption of the mail there, so UX programmers take note!)
  • Update address books, vcard, and LDAP servers so they expect a public key for each correspondent and complain if one isn’t provided or can’t be found.  An email address without a corresponding key should be flagged as problematic.
  • Corporate and hierarchical organizations should use a certificate authority-based key certification system, everyone else should use web-of-trust/perspectives style key verification, which can be easily automated to significantly reduce the risk of MitM attacks.

This is easy. It should have been done a long time ago.

 

Footnotes

1. I remember this anecdote from an early 1990s version of PGP. I may be mis-remembering it, as the closest reference I can find is this FAQ.
Posted at 16:21:29 UTC

Category: FreeBSD, Privacy, Security, technology

SSL for Authentication Sucks

Wednesday, November 26, 2014 

One of the most horrible mistakes made in the early days of the internet was to use SSL (an “HTTPS” connection) for both securing a connection with encryption and verifying that the server you reach matches the URL you entered.

Encryption is necessary so you can’t be spied on by anyone running wireshark on the same hotspot you’re on, something that happens all the time, every day, to everyone connecting to public wifi, which means just about everyone just about any time they take a wifi device out of the house.  It is pretty certain that you – you yourself – have thwarted cybercrime attempts thanks to SSL, not just once but perhaps dozens of times a day, depending on how often you go to Starbucks.

The second purpose, attempting to guarantee that the website you reached is served by the owner of the domain name as verified by some random company you’ve never heard of is an attempt to thwart so-called “Man in the Middle” (MITM) and DNS poisoning attacks.  While these are also fairly easy (especially the latter), they’re both relatively uncommon and the “fix” doesn’t work anyway.

In practice, the “fix” can be detrimental because it gives a false sense of security to that sliver of the population that knows enough to be aware that the browser bar ever shows a green lock or any other indicator of browser trust, but not aware enough to realize that the indicator is a lie. It is beyond idiotic that our browsers make a big show of this charade of identity verification, with great colorful warnings of non-compliance whenever detected, in order to force everyone to pay off the cert mafia and join in the protection racket of pretending that their sites are verified.

I’ve written before why this is counterproductive, but the basic problem is that browsers ship with a set of “root” certificates[1] that they trust for no good reason at all except that there’s a massive payola racket: if you’re a certificate issuer with a distributed, accepted CA certificate, you can print money by charging people absurd fees for executing a script on your server which, at zero cost to the operator, “signs” their certificate request (oh please, please great cert authority sign my request) so that browsers will accept it without warning. It isn’t like they actually have the owner of the site come in to their office, show ID, and verify they are who they say they are. Nobody does that except CAcert, which is a free service and, surprise, their root cert is not included in any shipping browser.

Users then will typically “trust” that the site they’re connecting to is actually the one they expected when they typed in a URL. Except they didn’t type a URL: they clicked on a link, and they really have no idea where their browser is going, will not read the URL in the browser bar anyway, and bankomurica.com is just as valid as bankofamerica.com. So the typical user has no clue where the browser thinks it is going, and a perfectly legit, valid cert can be presented for a confusing (or not really so much) URL. Typosquatters and pranksters have exploited this very successfully and have proven beyond any doubt that pretending that a URL is an unambiguous identifier is foolish, and so too, therefore, is proving that the connection between the browser and the URL hasn’t been hijacked.

Further, law enforcement in most countries requires that service providers ensure that it is possible to surreptitiously intercept communications on the web: that is, to do the exact thing we’re sold that a “valid” certificate makes “impossible.” In practice they get what are called “lawful intercept” certificates, which are a bit like a fireman’s key that doesn’t compromise your security because only a fireman would ever, ever have one. Countries change hands and so do these. If you think you’re a state-level target and certificate signing has any value, you’re actually putting your life at risk. This is an immense disservice because there will be some people at risk, under surveillance, who will actually pay attention to the green bar and think it means they are safe. It does not. They may die. Really.

Commercial certs can cost thousands of dollars a year and provide absolutely zero value to the site visitor except making the browser warnings go away so they can visit the site without dismissing meaningless and annoying warnings. There is absolutely no additional value to the site operator for a commercial cert over a completely free self-signed cert except to make the browser warnings go away for their visitors. The only entities that benefit are the certificate vendor, from the fees they charge site operators, and the browser vendor, from whatever fees are associated with including certificates in the browser installer. You, the internet user, just lose out because small sites don’t use encryption because they can’t afford certs or the hassle, and so your security is compromised to make other people rich.

There are far better tools[2] that use a “Web of Trust” model, pioneered by PGP back in the early 1990s, which actually does have some meaning and is used by CAcert; CAcert certificates therefore actually mean something when they indicate that the site you’re visiting is the one indicated by the URL. But since CAcert doesn’t charge, and therefore can’t afford to buy into the cert mafia, their root certs are not included in browsers, so you have to install them yourself.

The result is that a small website operator has four options:

  • Give up on security and expose all the content that moves between their server and their visitors to anyone snooping or logging,
  • Use a self-signed cert[3] to encrypt traffic, which will generate all sorts of browser warnings for their visitors in an attempt to extort money from them,
  • Use one of the free SSL certificate services that become increasingly annoying to keep up to date and provide absolutely zero authentication value but will encrypt traffic without generating warnings,
  • Use CAcert and ask users to be smart enough to install the CAcert root certificate, and thus actually encrypt and reasonably securely prove ownership.

And, of course, agitate for rationality: Perspectives and the CAcert root should ship with every browser install.

Footnotes

1. You can review a list of the certificates of trusted Certificate Authorities here. Note that the list includes state-agency certificates from countries with controversial human rights records.
2. The hierarchical security model that browsers currently use, referencing a certificate authority, does work well for top-down organizations like companies or the military (oddly, the US Military’s root certificates aren’t included in browsers).  In such a situation, it makes sense for a central authority to dictate what sources are trusted.  It just does not make sense in an unstructured public environment where the “authority” is unknown and their vouch means nothing.
3. If you’re running your own web services, for example a web-interface to your wifi router or a server or some other device with a web interface, it will probably use a self-signed cert and you’ve probably gotten used to clicking through the warnings, which at least diminishes the blackmail value of the browser warnings as people get used to ignoring them.  Installing certificates in Firefox is pretty easy.  It is a major hassle in Chrome or IE (because Chrome, awesome work Google, great job, uses IE’s certificate store, at least on Windows). Self-signed certs are used everywhere in IT management, almost all web-interfaced equipment uses them.   IBM has a fairly concise description of how to install the certs.  Firefox wins.
Posted at 15:50:20 UTC

Category: technology


Unbreaking FireFox SSL Behavior

Sunday, January 24, 2010 

I used to love Firefox, but then somebody decided that users were way too stupid to make it through web browsing without an endless parade of warnings about SSL certs. The premise seems to be that:

  • Valid certs are meaningful.
  • Self-Signed or expired certs are indicative of a problem.

Neither is true.

(To a statistical certainty.  Some user somewhere will be validly warned away from a phishing site someday.)

Valid certs mean next to nothing, since the users these warnings are targeted at (and me too) will never notice whether they’re going to bankofamerica.com (or whatever BofA’s legitimate URL is) or bankomerica.com (assuming bankomerica isn’t a valid Bank of America domain). Thus bankomerica can dupe bankofamerica’s website and get a perfectly valid cert, and if users were dumb enough to believe that a lack of warnings indicated validity, as the huge scary warnings effectively convey, they’d be easy prey.

The only valid purpose of SSL is to secure communication between a server and a client so you can check your web mail at a cafe without worrying about being snooped, and a self-signed cert does that just as well as one issued by the cert mafia. Sure, sure, the giant cert authorities would love to take your $1,000 a year to give your users some sort of guarantee that you’re really who you say you are, but that doesn’t make any difference at all in practice.

As for DNS hijacking, where amazon.com goes to a spoof site whose transaction security is compromised (and in theory the self-signed cert would be a give-away): just mod-rewrite to http, then redirect to amazoncheck0utservices.com and get a valid cert for it.

Besides, after users have been forced to dismiss a zillion intranet “invalid” certs, they’ve learned to completely ignore the warnings and automatically click through the scary and almost always pointless warnings FireFox generates. Or, like many people, they abandon the scary, irritating browser and go back to IE. Win. Oh wait… FAIL.

DNSSEC is smart, but forget warning people into oblivion over self-signed certs; the net effect is to make the web less secure, because site admins have to choose between absurd fees for certs or turning certs off. Until FireFox fixes this counterproductive behavior, there are two things that help. First, browse to about:config and set browser.ssl_override_behavior to “2”.
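If you’d rather keep the setting in a file, the same pref can go in user.js in your Firefox profile directory:

user_pref("browser.ssl_override_behavior", 2);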

FIX SSL config in FireFox

I’ve also found the Perspectives plugin useful for reducing the number of pointless and irritating error warnings Firefox generates when it sees a cert that hasn’t fully paid up its protection racket extortion fees. It uses a polling mechanism, effectively asking a collection of referee sites “y’all think this cert is ok?” and if they say “yeah…” then you get no error.

[Screenshot: Perspectives plugin]

These fixes are helpful for those of us sufficiently skilled to use them, but unfortunately they won’t prevent users abandoning the endlessly “WOLF!”-crying FireFox for IE.

Posted at 19:18:52 UTC

Category: technology

Let’s Encrypt….

Sunday, December 10, 2017 

Let’s encrypt, why not?

Wanna know how I did it with FreeBSD/Apache/acme-client? Jump below.

Let’s Encrypt is a service from the Internet Security Research Group, backed by (among others) the fine people at Mozilla, who, when they’re not trying to prove that Firefox can be a Chrome clone, do some really good stuff. Certificates are what give you the little warm fuzzy feeling of a green lock icon and, when properly configured, avoid giving you that terrifying feeling that something horrible is about to happen if you visit a site with an expired one or a self-signed one.

There are some huge structural problems in the certificate concept that seem to exist only to validate the certificate mafia, which can charge $100s per year for a validated certificate, as if executing the script to issue one were somehow expensive. It is not: you can generate one yourself that provides exactly the same security as one provided by a big company that gets its root certs distributed in a browser, but browsers reject these with scary messages, so webmasters have to keep buying them.

Now there’s a theory behind why they’re ripping you off: the premise is that the certificate verifies the site is who it says it is – that if you go to mybank.com, you’re actually visiting your real bank, not being redirected by a man-in-the-middle attack to some fake landing page to harvest your passwords, log into your account, and steal all your cats.  There are a few problems with this:

  • Nobody actually checks a URL so while a certificate sort of adds some weight to the probability that mybank.com is owned by mybank, not some hacker a few tables over ARP poisoning the cafe wifi, it doesn’t do anything if you click on a link to mibank.com.
  • The companies that claim to check IDs and verify owners do not. That would cost money. You think they’re gonna actually do that? No… (CAcert actually does, but they don’t get a root cert because… they do it for free. And don’t have Mozilla’s money and clout.)
  • Stealing a root cert private key can generate significant LOLZ; it happens a lot.
  • Law enforcement the world over has “lawful intercept” certs. If you have ever used social media, you’re probably on some country’s poop list, and some country’s laws somewhere almost certainly permit intercepting your communications, no matter who you are.
  • But dang, those annoying warnings that do nothing to secure you mean that people who publish a website just for the good of the planet either have to pay up, go through a lot of hassle, or leave their user’s content streams exposed to the world’s prying eyes…

…Until Let’s Encrypt came along. It is a lovely little set of tools and services that not only issue browser-accepted certs (see the green lock?) but also automate renewal. They basically check that you have enough control over your website to let a script write a file that they can read back and verify; if so, you’re who you say you are: the person with write access to the server powering the website they’re giving the certificate to. That’s all anyone can really do, and it is as secure as any other cert there is for identifying a site. That is, barring stolen certs, URL typos, law enforcement certs, or malicious code on your computer, if you visit https://blackrosetech.com and you don’t get any warnings, you’re probably reading data coming off my computer and not some hacker pretending to be me.

I got Let’s Encrypt to work, but it took some modifications of the existing guides, and I think the service is a good thing that more people should use, so in the spirit of investing some of my resources into the great shared experiment that is Open Source, here’s my How To:


Upstream Guides:

I found these two guides extremely helpful.

https://www.richardfassett.com/2017/01/16/using-lets-encrypt-with-acme-client-on-a-freebsd-11apache-2-4/

https://brnrd.eu/security/2016-12-30/acme-client.html

Step 1: Installing the certificate generation tool

There are a few different software tools to manage the Let’s Encrypt process. I elected to use Kristaps Dzonsons’ acme-client, ported to FreeBSD by Bernard Spil.

I was using OpenSSL on my site. Bernard and Kristaps have some strong opinions on OpenSSL and Heartbleed and a few other problems, and therefore require LibreSSL. If you’re using it already, great. If not, you’ll have to install it. It wasn’t too terrible, but I ran into a few issues:

https://wiki.freebsd.org/LibreSSL
Or, easy peasy https://ootput.github.io/2016/07/20/Switching-to-LibreSSL/

# ee /etc/make.conf
DEFAULT_VERSIONS+= ssl=libressl
# portmaster -od security/libressl security/openssl
# portmaster -rd security/libressl
or, for a complete refresh,
# portmaster -Rafd
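A quick sanity check that the switch took: the port installs its own binary under /usr/local, and it should report LibreSSL rather than OpenSSL.

# /usr/local/bin/openssl version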

Curl will probably fail with LibreSSL (and with the latest, if it has brotli support enabled).  Check the google to see if these fixes are still needed, or just:

# cd /usr/ports/ftp/curl
# make config

disable TLS-SRP  https://forums.freebsd.org/threads/56917/

ftp/curl 7.75.0 has an issue with pied piper brotli, which requires modifying the makefile to build --without-brotli as indicated in comment #2 

(Sunpoet, the curl port maintainer, got back to me with an update: when PR/223966 is integrated in Brotli, he will add an optional Brotli support flag and it should work fine at that point without the Makefile edit.)

Step 2: Actually installing acme-client

The really easy part: you should be able to

# portmaster security/acme-client

and be on your way to configuration heaven.

Step 3: Initial configuration

The defaults for acme-client expect certain directories to exist and the installer doesn’t create them.

# mkdir -pm750 /usr/local/www/.well-known && chown -R www:www /usr/local/www/.well-known
# mkdir -pm750 /usr/local/www/.well-known/acme-challenge && chown -R www:www /usr/local/www/.well-known/acme-challenge

The how-to’s seemed to forget the last one.

And make a modification to your httpd.conf file to permit the Let’s Encrypt servers to have access to these folders:

# ee /usr/local/etc/apache24/httpd.conf

add the following:

# Let's Encrypt challenge directory configured per 
# https://brnrd.eu/security/2016-12-30/acme-client.html
<Directory "/usr/local/www/.well-known/">
        Options None
        AllowOverride None
        Require all granted
        Header add Content-Type text/plain
</Directory>

And, for each VHOST that is going to get a cert:

# ee /usr/local/etc/apache24/extra/httpd-vhosts.conf

add to each non-ssl VHOST definition the following:

Alias /.well-known/ /usr/local/www/.well-known/

such that you end up with something like (yours may be different, especially watch out for BasicAuth or ModRewrite, addressed further down):

<VirtualHost IP.NU.MB.ER:80>
    ServerName domain.com
    ServerAdmin admin@domain.com
    DocumentRoot /usr/local/www/data-dist/domain-root
    ServerAlias *.domain.com www.domain.com
    Alias /.well-known/ /usr/local/www/.well-known/
    ErrorLog /var/log/domain-error_log
    CustomLog /var/log/domain-access_log combined
    ScriptAlias /cgi-prg /www/cgi-prg
</VirtualHost>

Don’t forget!

# apachectl restart

Step 4: First Try

At this point the system should be configured sufficiently to do a trial run with a single domain from the command line. Later on there are some scripts that will automate the process of both converting a large number of VHOSTed domains on a server to Let’s Encrypt and for maintaining them and getting email notifications if anything goes wrong in the, hopefully, fully automatic renewal process.

# acme-client -mvnNC /usr/local/www/.well-known/acme-challenge domain.com www.domain.com

This should create all the directories still needed and populate them, then check with the Let’s Encrypt server, get a certificate, and install it in the right place. Inshallah.

If you get something like

acme-client: transfer buffer: [{ "type": "urn:acme:error:malformed","detail": "Provided agreement URL[https://letsencrypt.org/documents/LE-SA-v1.1.1-August-1-2016.pdf]does not match current agreement URL[https://letsencrypt.org/documents/LE-SA-v1.2-November-15-2017.pdf]","status": 400 }] (267 bytes)

That means the Let’s Encrypt agreement has changed. You can’t do much but write the port maintainer or wait for an update. It will get fixed quickly and should only happen once a year. I don’t think you’ll get it at all unless you’re unlucky enough to try to update while it is changing. I was.

More likely you’ll get something like

acme-client: transfer buffer: [{ "type": "http-01", "status": "invalid", "error": { "type": "urn:acme:error:unauthorized", "detail": "Invalid response from http://www.domain.com/.well-known/acme-challenge/evReZz6s1uSVZbgEVdKkWElx_NHb3NmbbwGbADUwRtQ: (etc...)

This means there’s a problem accessing the /.well-known/ directory by the server.  There can be a lot of reasons for this:

  • You didn’t restart apache # apachectl restart
  • There was an error in the config file (look at the output of the restart) and therefore apache didn’t actually reload with your new config.
  • DNS isn’t pointing where you think it is pointing.  Check with nslookup/whois to make sure.  Really.
  • You have the directories protected in some way – like with .htaccess.  (see below)
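For the mod_rewrite case, the usual shape of the fix is to let the challenge path through before any blanket redirect fires. A sketch for a port-80 vhost that force-redirects everything to HTTPS (adapt to your own rules; BasicAuth setups need an analogous exemption for the aliased directory):

RewriteEngine On
# let the Let's Encrypt challenge path through untouched
RewriteRule ^/\.well-known/ - [L]
# then redirect everything else as before
RewriteCond %{HTTPS} off
RewriteRule ^/(.*) https://%{HTTP_HOST}/$1 [R=301,L]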

But if it goes well, you’ll get something like:

acme-client: /usr/local/etc/acme/domain.com/privkey.pem: account key exists (not creating)
acme-client: /usr/local/etc/ssl/acme/private/domain.com/privkey.pem: domain key exists (not creating)
acme-client: https://acme-v01.api.letsencrypt.org/directory: directories
acme-client: acme-v01.api.letsencrypt.org: DNS: 173.223.13.221
acme-client: acme-v01.api.letsencrypt.org: DNS: 2001:418:142b:290::3d5
acme-client: acme-v01.api.letsencrypt.org: DNS: 2001:418:142b:28d::3d5
acme-client: https://acme-v01.api.letsencrypt.org/acme/new-authz: req-auth: domain.com
acme-client: /usr/local/www/.well-known/acme-challenge/_ffVe6jHNHbIG1XKAeoqQmmtryWMGCKsfHIWWkl5lJw: created
acme-client: https://acme-v01.api.letsencrypt.org/acme/challenge/5HKzgB9diS5ecS6WbYJsHeEXsSWZeMhdYFmMfN9voHA/2673529867: challenge
acme-client: https://acme-v01.api.letsencrypt.org/acme/challenge/5HKzgB9diS5ecS6WbYJsHeEXsSWZeMhdYFmMfN9voHA/2673529867: status
acme-client: https://acme-v01.api.letsencrypt.org/acme/new-cert: certificate
acme-client: http://cert.int-x3.letsencrypt.org/: full chain
acme-client: cert.int-x3.letsencrypt.org: DNS: 184.23.159.176
acme-client: cert.int-x3.letsencrypt.org: DNS: 184.23.159.177
acme-client: cert.int-x3.letsencrypt.org: DNS: 2001:5a8:100::b817:9fb0
acme-client: cert.int-x3.letsencrypt.org: DNS: 2001:5a8:100::b817:9fb1
acme-client: /usr/local/etc/ssl/acme/domain.com/chain.pem: created
acme-client: /usr/local/etc/ssl/acme/domain.com/cert.pem: created
acme-client: /usr/local/etc/ssl/acme/domain.com/fullchain.pem: created

Yay, you’ve got certs!  Now update your vhosts file to point to the certs you just created.  You may need to add a 443 container or, if it exists, update it to point to the new certs and restart apache.

# ee /usr/local/etc/apache24/extra/httpd-vhosts.conf

<VirtualHost IP.NU.MB.ER:443>
      ServerName domain.com
      ServerAdmin admin@domain.com
      DocumentRoot /usr/local/www/domainroot
      ServerAlias domain.com sub.domain.com
      SSLEngine on
      SSLCertificateFile /usr/local/etc/ssl/acme/domain.com/cert.pem
      SSLCertificateKeyFile /usr/local/etc/ssl/acme/private/domain.com/privkey.pem
      SSLCertificateChainFile /usr/local/etc/ssl/acme/domain.com/chain.pem
      Header set Strict-Transport-Security "max-age=31536000; includeSubDomains"
      ErrorLog /var/log/domain-error_log
      CustomLog /var/log/domain-access_log combined
</VirtualHost>

Save and restart, then look for any errors (typos in directory paths etc. will be detected and apache won’t restart; be aware, though, that it won’t quit either).

# apachectl restart
 Performing sanity check on apache24 configuration:
 Syntax OK
 Stopping apache24.
 Waiting for PIDS: 81160.
 Performing sanity check on apache24 configuration:
 Syntax OK
 Starting apache24.

Navigate to https://domain.com/ and check out your new green lock.  Check security and you should find:

W00T!
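You can also sanity-check from the shell without a browser; the subject, the issuer (Let’s Encrypt), and the validity dates should all look right (domain.com is a stand-in):

# openssl s_client -connect domain.com:443 -servername domain.com < /dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates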

Acme-Client Options

# man acme-client has all the deets, but we’re using:

  • -m to append the domain name to paths; use it always or never.
  • -v for verbose output so we can see what is going on.
  • -n to check if an account key exists and create if not (no reason to omit)
  • -N to check if a domain key exists and create if not (also no reason to omit)
  • -C to specify the path to the challenge dir. These guides all assume a centralized challenge dir outside the main serving path, to which we redirect via an alias directive.
  • -F which forces the recreation of certs even if they haven’t expired (this counts against your 10 per 3 hours limit)
  • -s which redirects the process to the Let’s Encrypt staging server, which has no volume limits but also doesn’t create certs browsers accept.  (Using this is fine, but requires cleanup to switch to the production server, see below)
  • -e which is used to add a SAN to the certificate.  Removing one is a bit more involved (see below).
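Putting those together, a typical first-time run for a single domain (the same shape the bulk script below wraps in a loop) looks like:

# acme-client -mvnNC /usr/local/www/.well-known/acme-challenge domain.com www.domain.com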

Automating Registration

Let’s say you have a lot of domains; you might want to automate the process.  I modified the renewal script to automate the registration process.  This saved some time, but one quirk is that you can only register 10 domains (certificates, including SANs; basically 10 lines of the domains list) per 3 hours (so they say; I found it takes more like 12 hours to be allowed to register more).

First create a file with all the domains you want to register for a Let’s Encrypt certificate, in the same format as the renewal script uses (it can be the same file, but I made mine different as I was experimenting):

# ee /usr/local/etc/acme/newdomains.txt
domain.com www.domain.com
domain2.com www.domain2.com
domain3.com www.domain3.com 
(save)

# ee /usr/local/etc/acme/acme-client-bulk-add.sh

#!/bin/sh

###
#
# This script was adapted by Richard Fassett from letskencrypt.sh
# by Bernard Spil
# See https://brnrd.eu/security/2016-12-30/acme-client.html
#
# and updated again from Richard Fassett's script at
# https://www.richardfassett.com/2017/01/16/using-lets-encrypt-with-acme-client-on-a-freebsd-11apache-2-4/#comment-282
#
# this requires a file called /usr/local/etc/acme/newdomains.txt of the format
# domain.tld sub.domain.tld alt.domain.tld
# domain2.tld 
# domain3.tld sub.domain3.tld 
# etc
#
# This should only be run to bulk-add domains.
###

# Define location of dirs and files
DOMAINSFILE="/usr/local/etc/acme/newdomains.txt"
CHALLENGEDIR="/usr/local/www/.well-known/acme-challenge"

# Loop through the newdomains.txt file with lines like
# example.org www.example.org img.example.org
cat ${DOMAINSFILE} | while read domain subdomains ; do

  # Create the cert directory with the command
  # acme-client -mvnNC /usr/local/www/.well-known/acme-challenge (domain subdomains)
  
  acme-client -mvnN -C "${CHALLENGEDIR}" ${domain} ${subdomains}

done

# chmod +x /usr/local/etc/acme/acme-client-bulk-add.sh
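Then execute it and watch the registrations scroll by; each line of newdomains.txt gets the same treatment as a manual run:

# /usr/local/etc/acme/acme-client-bulk-add.sh

If you have more than 10 lines, expect the requests past the rate limit to fail until the window resets; just rerun the script later, which should be safe to repeat since -n and -N skip keys that already exist and valid certs aren’t re-requested.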

A few fixes/recoveries that might be useful at this point: add SAN, remove SAN, switch from staging to production Let’s Encrypt servers.

Automation can break things; you might find you adjusted a few domains incorrectly or want to add a SAN later.

If you need to redo a domain from scratch, for example if you used the -s option, which creates a cert from the staging server that doesn’t have volume limits (maybe you’re testing a lot of domains or trying to debug a particularly tricky .htaccess or DNS condition), you might have created a domain with acme-client -mvnsNC /usr/local/www/.well-known/acme-challenge domain.com www.domain.com and then want to generate the production cert.  You also need to do this to remove a SAN; if you try without deleting the directories, you’ll get something like unknown SAN entry.  (Replace “domain.com” with your domain in the commands below.)

# setenv DD domain.com
# rm -r /usr/local/etc/ssl/acme/private/$DD && rm -r /usr/local/etc/acme/$DD && rm -r /usr/local/etc/ssl/acme/$DD && acme-client -mvnFNC /usr/local/www/.well-known/acme-challenge $DD www.$DD
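If you’re not sure whether an existing cert came from the staging or the production server, the issuer gives it away; staging certs are signed by a fake intermediate (a quick check, assuming the paths above):

# openssl x509 -noout -issuer -in /usr/local/etc/ssl/acme/domain.com/cert.pem

If the issuer contains “Fake LE”, it’s a staging cert and needs the delete-and-reissue treatment above.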

If you need to add a new SAN to an existing domain:

acme-client -mvneFNC /usr/local/www/.well-known/acme-challenge domain.com www.domain.com newsub.domain.com

It is the -e flag that “extends” the certificate.

Step 5: Automating Renewal

You might notice that the duration of the certificate is rather short: 3 months.  You really don’t want to be responding to certificate-expired errors every 3 months, so let’s automate the renewal process.  For this you can create two files and store them on your server: one is the renewal script itself and the other is a list of domains to renew.  This assumes you have more than one domain.  If you only have one domain, this is a bit of overkill, but it will work, so why not?  You might get more domains in the future.  Everyone does.

First create a file with your list of domains; call it something creative like “domains.txt”.  This is really a certificate request list with the “primary” domain and Subject Alternative Names (SANs) each on a single line.  In theory the SANs can be all over the place, and Let’s Encrypt allows up to 100 per certificate (quite a lot), so the implication of the “domains.txt” naming is a bit inaccurate, but that’s what everyone is using, so we won’t be contrary.  You have to make sure that all the subdomains resolve: the Let’s Encrypt servers are going to look them up via DNS, and if there aren’t working entries, this will fail with one of the errors above.  Check first (a quick resolution check follows the listing below).  I have not tested whether, if, for example, you own domain.com, domain.org, and domain.net and they all point to the same directory, you can use one cert with different TLDs (or domains) as SANs; you should be able to, but I didn’t try.

# ee /usr/local/etc/acme/domains.txt

domain.com www.domain.com sub.domain.com sub2.domain.com
domain.org www.domain.org
domain2.com www.domain2.com cats.domain2.com kittens.domain2.com
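Since an unresolvable name will make the whole certificate request fail, here is that pre-flight check: a minimal sketch using the stock host(1) that prints any name in domains.txt that doesn’t resolve:

# sh -c 'while read domain subdomains ; do for d in $domain $subdomains ; do host $d > /dev/null || echo "$d does not resolve" ; done ; done < /usr/local/etc/acme/domains.txt'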

Now that you’ve saved that, the following script is adapted from a few of the references listed above and works on my server.  I made a few adjustments and corrections (there was a name change for acme-client, formerly letskencrypt, which hasn’t quite propagated through all the HowTos yet).

# ee /usr/local/etc/acme/acme-client-update.sh

#!/bin/sh

###
#
# This script was adapted from letskencrypt.sh by Bernard Spil
# See https://brnrd.eu/security/2016-12-30/acme-client.html
# ... and further modified by David Gessel  
# This script will fail if the directories haven't been set up or the
# domains in domains.txt haven't been successfully verified
#
###

# Define location of dirs and files
DOMAINSFILE="/usr/local/etc/acme/domains.txt"
CHALLENGEDIR="/usr/local/www/.well-known/acme-challenge"

# is changed to 1 if any domains expired and were renewed
CHECKEXPIRATION=0

# Loop through the domains.txt file with lines like
# example.org www.example.org img.example.org
# (read from a redirect rather than a pipe: a piped "while" runs in a
# subshell and CHECKEXPIRATION would never make it back to this shell)
while read domain subdomains ; do

    # acme-client returns RC=2 when certificates
    # weren't changed; use set +e to capture the return code
    set +e
    # Renew the key and certs if required
    acme-client -mvb -C "${CHALLENGEDIR}" ${domain} ${subdomains}
    RC=$?

    # now that we have the return code, set script to exit if
    # nonzero is returned
    set -e

    # if anything is expired, we'll want to do something
    # (e.g., restart HTTPS)
    if [ $RC -ne 2 ] ; then
        CHECKEXPIRATION=1
    fi
done < "${DOMAINSFILE}"

if [ "$CHECKEXPIRATION" -ne "0" ] ; then
        service apache24 restart
fi

# chmod +x /usr/local/etc/acme/acme-client-update.sh
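Before wiring it into cron, run it once by hand and watch the output; certs that are still fresh should just report valid, and anything close enough to expiry gets renewed and triggers the Apache restart:

# /usr/local/etc/acme/acme-client-update.sh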

This works quite well and will walk through your domains and renew as needed.

I have 36 domain/certificate lines in my “domains.txt” file, and timing this script, it takes 2.13 seconds to execute on my server.  There’s no real problem running it every night, and if you have a lot of domains, you should remember you can only get 10 certs at a time and they won’t renew until about a week before expiry, a limitation I ran into in the bulk setup process.  You can spread your domain renewals out over the three months by force-renewing blocks of them if you have more than about 60 per server (a sketch of this follows).
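A minimal sketch of that staggering approach, assuming you want to force-renew just the first ten lines of domains.txt (adjust the sed range for each block, and remember -F counts against the rate limit):

# sh -c 'sed -n 1,10p /usr/local/etc/acme/domains.txt | while read domain subdomains ; do acme-client -mvbF -C /usr/local/www/.well-known/acme-challenge $domain $subdomains ; done'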

You probably want to automate the process as a cron job.  But before we do, let’s address one more little problem: cron emails the admin everything the script prints, noise and all, and once you quiet it down by redirecting the output to a log file, you won’t hear about real failures either.  If your shell environment is wrong or the path to the script is wrong, cron will tell you, but if your domains don’t resolve or the script can’t reach /.well-known/, the evidence ends up in the log, not your inbox.  That might be a bummer.

So I redirect the output of the client-update.sh script to a log file.  It gets overwritten with each execution, so it doesn’t need to be rotated; it is just the output of the last execution.  It should be filled with lines including “adding SAN” (which it tells you for each domain) and “certificate valid” (which it tells you for each cert that doesn’t need to be renewed).  But it might tell you something else, like it barfed trying to reach the /.well-known/ directory because, say, you messed around with .htaccess or forgot to renew your domain and it is being redirected to parking or something.  The following script checks whether there are any lines in /var/log/lets-encrypt-renew other than the expected ones and, if so, emails just those lines.  You shouldn’t get anything until renewal time or if there’s an error.  If you don’t care about renewal notices, you can edit the script to ignore those too.

# ee /usr/local/etc/acme/acme-client-errors.sh

#!/bin/sh

###
# this script scans the log file created by the renewal execution cron job
# then removes any lines containing "adding SAN" or "certificate valid", which
# are normal messages, and mails whatever is left over using the "mail" command
# check full paths (or use relative) but full paths can avoid some errors
# use "# which grep" and "# which mail" on your system to check.

LOGFILE="/var/log/lets-encrypt-renew"

# anything left over after filtering out the routine messages is a problem
PROBLEMS=$(/usr/bin/grep -v "adding SAN" "${LOGFILE}" | \
           /usr/bin/grep -v "certificate valid")

if [ -n "${PROBLEMS}" ] ; then
        echo "${PROBLEMS}" | \
        /usr/bin/mail -s "Lets Encrypt Errors" gessel@blackrosetech.com $1
fi

# chmod +x /usr/local/etc/acme/acme-client-errors.sh

My cron configuration is set up as

# crontab -e

#*     *     *   *    *        command to be executed
#-     -     -   -    -
#|     |     |   |    |
#|     |     |   |    +----- day of week (0 - 6) (Sunday=0)
#|     |     |   +------- month (1 - 12)
#|     |     +--------- day of month (1 - 31)
#|     +----------- hour (0 - 23)
#+------------- min (0 - 59)

MAILTO=gessel
# expanded path
PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/games:/usr/local/sbin:/usr/local/bin
SHELL=/bin/csh
# Let's Encrypt renewal check
# (minute field is 0 so each job runs once a day, not every minute of the hour)
0       3       *       *       *        /usr/local/etc/acme/acme-client-update.sh >& /var/log/lets-encrypt-renew
0       4       *       *       *        /usr/local/etc/acme/acme-client-errors.sh

Note that this requires that mail works.  On servers that aren’t serving email, I use SSMTP, configured more or less following this guide https://www.freebsd.org/doc/handbook/outgoing-only.html along with https://www.davd.eu/freebsd-send-mails-over-an-external-smtp-server/ and https://www.debarbora.com/freebsd-10-1-setup-ssmtp-for-outgoing-mail/, especially the tip about using # chpass to change the default Full Name for root from “Charlie &” to something useful like “ServerName Root.”
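Before testing the whole renewal pipeline, it’s worth confirming that mail delivery itself works with something like (substitute your own address):

# echo "mail test" | mail -s "test from server" gessel@blackrosetech.com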

You can test the mail function by adding a random word (or domain) to your domains.txt file and then executing

# /usr/local/etc/acme/acme-client-update.sh >& /var/log/lets-encrypt-renew
# /usr/local/etc/acme/acme-client-errors.sh

If everything is set up right, you’ll get an email complaining about your random word not being valid.  If you restore the correct domains.txt file and execute the above two commands you should not get an email at all.

# more /var/log/lets-encrypt-renew

should show only lines with “adding SAN” and “certificate valid” in them. If you execute # /usr/local/etc/acme/acme-client-errors.sh you shouldn’t get any message.

.htaccess Problems

If you’re controlling access to a directory or have some non-HTML style process listening, you might run into challenges giving the Let’s Encrypt server access to the /.well-known/ directory.  I found the following formulation worked:

AuthType Basic
AuthName "Please login."
AuthUserFile "/xxx/.htpasswd"
# the directive below also "requires" that the requested URL include /.well-known/
Require expr %{REQUEST_URI} =~ m#^/\.well-known/.*#
Require valid-user

Basically the stanza above allows (requires) a “valid-user” (one with an entry in the AuthUserFile and a valid matching password) and also requires (allows) a URL that is going to /.well-known/ and subdirectories thereof.  Since multiple Require directives outside a container default to “require any,” a request passes if either condition matches, which is what lets the Let’s Encrypt challenge through while everything else still needs a login.  This also works in /usr/local/etc/apache24/httpd.conf and /usr/local/etc/apache24/extra/httpd-vhosts.conf
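If you don’t already have a password file, the htpasswd utility that ships with Apache will create one (“/xxx/” and “username” here are placeholders; use whatever your AuthUserFile points at):

# htpasswd -c /xxx/.htpasswd username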

mod_rewrite to HTTPS Problems

You can also create problems by rewriting to HTTPS.  You might want to do this now that you have certs that will auto-renew and can provide a secure experience for everyone.  In order for the Let’s Encrypt servers to reach the /.well-known/ directory over plain HTTP, you have to add an exception to the mod_rewrite rule for traffic to this subdirectory, like so:

RewriteEngine on
RewriteCond %{REQUEST_URI} !^/\.well\-known/acme\-challenge/
RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{SERVER_NAME}/$1 [R=301,L]
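To confirm the exception works, request a bogus challenge file over plain HTTP (assuming curl is installed); you should get a 404 from the server rather than a 301 redirect to HTTPS:

# curl -I http://domain.com/.well-known/acme-challenge/test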

Also, if you redirect on a 404, some formulations cause problems. This one does not seem to:

ErrorDocument 404 /index.php


Posted at 06:43:58 UTC

Category: FreeBSD, HowTo, Security, technology