Security

On security matters, mostly digital.

AI PSYOPS are changing strategic messaging

Saturday, July 29, 2023 

Social media fundamentally changed strategic messaging, cutting the cost per effect by at least two orders of magnitude, probably more.  It has become the most cost-effective munition in the global arsenal. Even when it took teams of actual humans to generate content and troll farms to flood social media with messaging intended to produce a desired outcome (swing an election, start a war, damage alliances, break treaties, or build support for one policy or another, foreign or domestic), it was still a revolution in reduced-cost warfare.

Take Operation INFEKTION, the active measures campaign run by the KGB starting in about 1983 "to create a favorable opinion for us abroad that this disease (AIDS) is the result of secret experiments with a new type of biological weapon by the secret services of the USA and the Pentagon that spun out of control."

This campaign leveraged assets put in place as far back as 1962 and eventually consumed the authority of Prof. Jakob Segal as a self-referential authoritative citation.  After a little more than a decade of relentless media placements of strategic messaging, even in the United States more than 25% of the population had been convinced AIDS was a government project and 12% had been manipulated into believing it was created and spread by the CIA. The project was tremendously successful despite having to overcome the then-standard and generally principled editorial gatekeeping that protected "traditional" media from abuse and co-optation; it did so by manufacturing plausible chains of authority and fabricating deep, broad reference chains to thwart fact checking.

By the 2016 election, the KGB's successors, the IRA and GRU, efficiently and expertly leveraged social media to achieve even more impressive results, possibly winning the most significant military battle in history: altering the outcome of the US election at a cost of only a few billion dollars and within a mere 2-3 years of effort.

Any-to-any publishing circumvents editorial protections (he writes without a trace of irony).  What might otherwise limit a psyop, being clearly outside any authoritative endorsement, something that required the consumption of an asset like Jakob Segal to achieve in an earlier era, has been overwhelmingly diminished by a parallel effort to destroy trust in institutions and authority, creating a direct path to shape the beliefs of targets through mass individualization of messaging, unchecked by any need for longitudinal reputation building.

The 2016 effort still cost billions, requiring a massive capacity build of English-speaking, internet-savvy teams inducted into "troll farms" (many ironically located in Bulgaria, given that country's role in Operation INFEKTION), but that model may already be obsolete just 8 years later.

Many have written about ChatGPT representing some sort of existential risk to humanity's future, some quick resolution to the Fermi Paradox, but the real risk is an acceleration of the destruction of objective truth and the substitution of conceptual paradigms that align with strategic outcomes.

As an example, let me introduce to you Dr. Alexander Greene, a person ChatGPT tells us “is a highly esteemed and celebrated professor with a remarkable career dedicated to advancing the fields of green energy and engineering.”

ChatGPT fabricates the lauded synthetic Dr. Alexander Greene

Obviously, it’s hard to really believe Dr. Greene without seeing the man himself, but fortunately we have a tool for that too:

Dr Alexander Greene, synthetic professor of green things and advocate for fossil fuels.

A few images from Bing/DALL-E and we can create a very convincing article that would easily pass muster as an authoritative discussion of the benefits of continuing to burn fossil fuels. With minimal editing and formatting, mostly to cut out the caveats that ChatGPT inserts into counterfactual text requests, we have such pearls of wisdom to impart upon the world as:

Access to affordable and reliable energy is a crucial driver of economic development, and historically, fossil fuels have played a significant role in providing low-cost energy solutions. While there are concerns about the environmental impact of fossil fuels, particularly their contribution to climate change, it is essential to understand the benefits they have brought to the developing world and the potential consequences of increasing energy costs.

Read the whole synthetic article in pdf form below and consider the difficulty of finding a shared factual foundation in a world where it is trivial to synthesize plausible authority.

The Benefits of Low-Cost Energy from Fossil Fuels and the Impact of Increasing Energy Costs on Developing Nations, "by" "Dr. Alexander Greene" (ghostwritten by ChatGPT).

Posted at 17:02:46 GMT-0700

Category: Security, Technology

LastPass: The Cloud is Public and Ephemeral

Thursday, January 5, 2023 

More or less anytime I'm prompted, I'll take the opportunity to say "The cloud, like its namesake, is public and ephemeral."  In his article, "A Breach at LastPass Has Password Lessons for Us All," Brian X. Chen comes about as close as a mainstream press report can, without upsetting the apple-cart of corporate golden eggs, to revealing how stupid it is for anyone to put any critical data on anyone else's hardware.

The article covers a breach at LastPass, a password management service that invites users to store their passwords on LastPass's computers somewhere in exchange for LastPass keeping track of every website you visit that requires a password. For reasons that are a little hard to understand, rather a lot of people thought this was an acceptable idea and entrusted the passwords to what are likely important web services to some random company, and to random employees nobody using the service has ever met or ever will, without any warranty, guarantee, or legal recourse at all when the inevitable happens and there's a data breach.

I suppose they believe that because the site appears to offer a service that looks like an analog of a safety deposit box, there must be some meaningful security guarantee, just as users of Gmail seem to assume their email will be in some way "secure" and "private," despite what the CEO of Google tells you.

Obviously, LastPass was hacked and, obviously, every user's secure account list (including their OnlyFans and Grindr accounts) and password database was exposed.  This is guaranteed to happen eventually at every juicy target on the internet.  It's just probability: an internet service is exposed to everyone on the planet with a network connection (5,569,029,076 people as of today), every target is attacked constantly (my own Fail2Ban has blocked 2,899,324 malicious packets), and even if they're Google, they're not smarter than the 5B+ people who can take a shot at them any time.

The most hilarious part of this is how idiotically fragile companies make themselves by chaining various "cloud services" into their service provision: LastPass was using a Cloud-Based Backup service that was hacked.  People.. people.. that level of stupidity is unforgivable, but sadly not remotely criminal (though it should be). The risk of failure in a chained service compounds with the length of the chain.  Every dependency is a humiliation.  This goes for developers too.
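To put rough numbers on that: if each link in a chain of services is independently up even 99.9% of the time (an illustrative figure, not anyone's actual SLA), the availability of the whole chain is the product of the links, something like:

# back-of-the-envelope: availability of a chain of n services, each 99.9% available
awk 'BEGIN {
    a = 0.999;                       # per-service availability (assumed, illustrative)
    for (n = 1; n <= 5; n++)
        printf "%d chained services: %.3f%% available\n", n, 100 * a^n;
}'

And that is only uptime; every added link is also one more independent chance of a breach.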

This breach means the attackers know at least every pr0n website millions of users have accounts on (as well as banks, etc.). It isn't clear how easily the passwords themselves will be exposed, and LastPass's technical description suggests a fairly robust encryption process, which should be comforting if your master password is a completely randomly generated string of at least 12 characters you've managed to memorize, like n56PQZAeXSN6GBWB. If your master password is some combination of dictionary words because you assumed it was stored securely and you were only risking the password generator's random passwords on sites (actually not a bad strategy, if you don't then screw up security by using a commercial cloud-based password keeper that exposes your master password to global attack, but whatever), then check have i been pwned regularly for the next year and change every password you have.
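For scale, the strength of a random master password is just its length times the bits per character; a quick back-of-the-envelope calculation (pool size and lengths here are assumptions for illustration, not LastPass's parameters):

# log2(62) is roughly 5.95 bits per mixed-case alphanumeric character
awk 'BEGIN {
    pool = 62;                        # a-z, A-Z, 0-9
    for (len = 8; len <= 16; len += 4)
        printf "%2d chars: ~%.0f bits of entropy\n", len, len * log(pool) / log(2);
}'

A 16-character random string like the example above is roughly 95 bits, far beyond practical brute force; a couple of dictionary words is a tiny fraction of that.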

The big lesson here is that if you put your or your company's data on someone else's hardware, it isn't your data any more; it is theirs, and you should assume that data is, or will soon be, public.  So do not ever put critical data of any sort on anyone else's hardware ever.  It's just stupid.  Don't do it.

If you insist on doing so because, say, you're not an IT person but you'd still like email, or you're a small company that can't afford to hire an IT person, or whose CIO has cut some side deals to "cut costs" by firing the IT staff and gifting the IT budget to his buddies running some crappy servers somewhere (and for some reason you haven't fired that CIO yet), I'd suggest you have your lawyers carefully review recourse in the event of incompetence or malice.  My personal starting point is to ask questions like the ones in this post and make sure the answers give comfort that the provider's liability matches your risk.

What we need is a legal framework that makes every bit of user data a toxic asset. If a computer under your care has other people’s confidential data on it and that data is exposed to any parties not specifically and explicitly authorized by the person to whom the data is pertinent, you should be subject to a penalty sufficient to not just make a person who is harmed by the breach whole, but sufficient to dissuade anyone from ever taking a risk that could result in such an exposure again.

Companies who have business models that involve collecting and storing data about individuals should be required to hold liability insurance sufficient to cover all damages plus any punitive awards that might arise from mishandling or other liability.  It is reasonable to expect that such obligations would make cloud services other than fully open/exposed ones with no personal data absurdly unprofitable and end them entirely; and this would be the optimal outcome.

Posted at 17:03:27 GMT-0700

Category: Events, Privacy, Security, Technology

Ancient history: DEF CON 9 Talk on Quantum Computers

Sunday, November 21, 2021 

Quantum Computing and Cryptography

I wrote a little email screed to a friend about the risks to bitcoin from advances in quantum computing and was reminded of a paper I presented at DEF CON 9 back in 2001 on quantum computing, back then limited to 8 qubits.

The remotely relevant bit was what I really hope someone (other than me) will call "Gessel's law" (after Moore's law: P = 2^(y/1.5)) on the power of quantum computing, at least once, as I believe it may have been the first presentation of the formulation: P = 2^(2^(y/2))
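For the curious, here's a quick way to eyeball the difference between the two curves; counting y in years from the 2001 talk and starting from a normalized power of 1 are my assumptions for illustration:

# Moore's law:    P = 2^(y/1.5)
# "Gessel's law": P = 2^(2^(y/2))  -- qubits double every 2 years, power is 2^qubits
awk 'BEGIN {
    for (y = 2; y <= 20; y += 6)
        printf "y=%2d   Moore: 2^%-4.1f   Gessel: 2^%.0f\n", y, y/1.5, 2^(y/2);
}'

The point being that the second formulation is doubly exponential: the exponent itself doubles every two years.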

How did my predictions hold up over the last 20 years?

I estimated Quantum Supremacy within about 10 years, or 2011.  D-Wave claimed to offer a quantum computer 15x faster than a classical computer in 2015, 3-4 years later than I predicted.  Google claimed quantum supremacy in 2019.

In 2020, D-Wave claimed to have a quantum computer with 5,000 qubits, slightly ahead of my prediction of 4,096 by 2021 back in 2001.

I did an analysis of the last 25 years of quantum computers: development stalled a bit between 2006 and 2016 but is taking off now.  There's more detail in a new post with some more data on the exponent's divisor in Gessel's law, but 2.0 still splits the difference between the full-period development rate and the post-2016 development rate.

This video of the original talk in 2001 has subtitles, but web players don't have such advanced controls yet; you can download the video (23MB) and play it with VLC to see them.

webm: A video of the original talk

pdf: an updated version of the talk as a transcribed paper

I took some time to edit the conversational language and correct and update after 22 years, the PDF is linked.

Quantum Computing and Cryptography, DEF CON 9.0, 15 July 2001, 10:00, Alexis Park Resort, Las Vegas, Nevada. Edited transcript with slides.

Also avail on youtube at https://www.youtube-nocookie.com/embed/kmXnv8vP0nc

Original Slides: https://gessel.blackrosetech.com/quantum_crypto_3.pdf

Original Notes: https://gessel.blackrosetech.com/quantum_notes.pdf 

The edited transcript with edited slides is transcribed into Blog format below.


Posted at 16:37:44 GMT-0700

Category: Events, Privacy, Security, Technology

Save your email! Avoid the Thunderbird 78 update

Saturday, June 19, 2021 

History repeats itself as the TB devs learn nothing from the misery they created by auto-updating 60x users to 68 without providing any warning or option to avoid the update.  This is crappy user management.  On updates that will break an installed add-on, the user should be informed of what will be disabled and asked if they want to proceed with the update, not silently forced to conform to a stripped-down, unproductive environment as if the user's efforts at optimization were childish mistakes unworthy of consideration or notice.

The Thunderbird devs have increasingly adopted an "if you're not doing it our way, you're doing it wrong and we're going to fix your mistake whether you like it or not" attitude.  This is highly annoying because the org already alienated their add-on community by repeatedly breaking the interface models add-on developers relied on.

For a while, add-on devs gamely played along, absorbing reputational damage as idiotic and poorly planned actions by Thunderbird devs broke their code, left them to deal with user frustration, and forced them to scramble to fix problems they didn't create.  Many, if not by now most, add-on developers finally had enough and abandoned ship.  This is tragic because without some of the critical modifications to Thunderbird provided by developers it is essentially unusable.

I eventually made peace with the add-on-pocalypse between 60 and 68 as add-on developers worked through it, and very carefully set my TB 68 to never update again, even though 90a finally fixes the problem 68 caused where it became impossible to display dates in ISO 8601 format, but that's a whole 'nother kettle of fish.

Still, despite trying to block it, I got a surprise update; if this keeps up, I’ll switch to Interlink Mail and News.

So if you, like I did, got force “upgraded” to 78 from a nicely customized 68, this is what worked for me to undo the damage: (If you weren’t surprise updated, then jump right down to preventing future surprises.)

  • Uninstall thunderbird (something like # sudo apt remove thunderbird)
  • Download the last 68:
  • Extract the tar file and copy it (sudo) to /usr/lib/thunderbird
sudo mv ~/downloads/thunderbird/ /usr/lib/thunderbird
  • Create a desktop entry

 

# nano ~/.local/share/applications/tb68.desktop

[Desktop Entry]
Version=1.0
Type=Application
Name=Thunderbird-68
Icon=thunderbird
Exec="/usr/lib/thunderbird/thunderbird"
Comment=last TB version
Categories=Application;Network;Email;
Terminal=false
MimeType=x-scheme-handler/mailto;application/x-xpinstall;
StartupNotify=true
Actions=Compose;Contacts

 

  • Prevent future updates (hopefully) by creating a no-update policy file:
# sudo nano /usr/lib/thunderbird/distribution/policies.json

{
  "policies": {
      "DisableAppUpdate": true
  }
}

and then, just to be sure, break the update checker code:

# sudo mv /usr/lib/thunderbird/updater /usr/lib/thunderbird/no-updater 
# sudo mv /usr/lib/thunderbird/updater.ini /usr/lib/thunderbird/no-updater.ini
  • Start the freshly improved (i.e. downgraded to the last remotely usable version) Thunderbird for the first time from the command line with the special downgrade-allowed option:
# /usr/lib/thunderbird/thunderbird --allow-downgrade

If you were unlucky enough to launch TB 78 even once, your add-ons are screwed up now (thanks devs, Merry Christmas to you too).  Those that have a 78-compatible version will have been auto-updated to the 78 version, which isn't compatible with 68 (w00t w00t, you can see why the plugin devs quit in droves).  At least this time your incompatible add-ons weren't auto-deleted like with 68.  Screenshot or otherwise capture a list of your disabled plugins, then remove the incompatible ones and add back the 68-compatible previous releases.

If the “find plugins” step doesn’t find your 68 plugin (weird, but it happens) then google it and download the xpi and manually add it.

  • Restart one more time normally to re-enable the 68-compatible add-ons (the ones without 78 updates) that the 78 launch disabled.

One more detail: if you find your CardBook remote address books are gone, you need to rebuild your preferences (a minimal script sketch of these steps follows the list).

  • Find your preferences folder: help->Troubleshooting Information-> about:profiles -> Open Directory
  • Back up your profile (good thing to do no matter what)
  • Uninstall the CardBook plugin
  • Quit TB
  • In your profiles directory, delete all files that end with .sqlite (rm *.sqlite)
  • Restart TB (the .sqlite files should be recreated)
  • Reinstall the CardBook plugin.  Your address books should reappear.  (if not, the advice on the interwebs is to create a new profile and start over).
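For the impatient, the whole CardBook recovery dance above boils down to something like this sketch; the profile path is a made-up example (use about:profiles to find yours), and it assumes Thunderbird is quit and the CardBook add-on already uninstalled:

#!/bin/sh
# sketch of the CardBook recovery steps above; profile path is an example
PROFILE="$HOME/.thunderbird/xxxxxxxx.default"
# back up the whole profile first (good thing to do no matter what)
tar czf "$HOME/tb-profile-backup-$(date +%Y%m%d).tgz" -C "$HOME/.thunderbird" "$(basename "$PROFILE")"
# drop the sqlite caches; they are recreated on the next start
rm -f "$PROFILE"/*.sqlite
# restart TB, then reinstall CardBook and the address books should reappear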

PHEW! just a few hours of lost time and you’ve fixed the misery the TB devs forced on you without asking.  How nice.  What thoughtful people.


Posted at 07:16:58 GMT-0700

Category: HowTo, Linux, Negative Reviews, Security, Technology

Never put important data on anyone else’s hardware. Ever.

Friday, January 22, 2021 

In early January, 2021, two internet services provided unintentional and unequivocal demonstrations of the intrinsic trade-offs between running one’s own hardware and trusting “The Cloud.”  Parler and Gab, two “social network” services competing for the white supremacist demographic both came under fire in the wake of a violent insurrection against the US government when the plotters used their platforms (among other less explicitly extremist-friendly services) to organize the attack.

Parler had elected to take the expeditious route of deploying their service on AWS and discovered just how literally the cloud is metaphorically like atmospheric clouds—public and ephemeral—when first their entire data set was extracted and then their services were unilaterally terminated by AWS knocking them completely offline (except, of course, for the exfiltrated data, which is still online and being combed by law enforcement for evidence of sedition.)

Gab owns their own servers and while they had trouble with their domain registrar, such problems are relatively easy to resolve: Gab remains online.  Gab did face the challenge of rapid scaling as the entire right-wing extremist market searched for a safe haven away from the fragile Parler and from the timid and begrudging regulation of hate speech and calls for immediate violence by mainstream social networks in the fallout over their contributions to the insurrection and other acts of right-wing terrorism.

In general customers who engage cloud service providers rather than self-hosting do so to speed deployment, take advantage of easy scalability (up or down), and offload management of common denominator infrastructure to a large-scale provider, all superficially compelling arguments.  However convenient this may seem, it is rarely a good decision and fails to rationally consider some of the intrinsic shortcomings, as Parler discovered in rather dramatic fashion, including loss of legal ownership of the data on those services, complete abdication of control of that data and service, and an intrinsic and inescapable misalignment of business interests between supplier and customer.

Anyone considering engaging a cloud service provider for a service that results in proprietary data being stored on third-party hardware, or for the provision of a business-critical service by a third party, should ensure that contractual obligations with well-defined penalties explicitly match the implicit expectations of privacy, stewardship, suitability of service, and continuity, and that failures are actionable sufficiently to make the client whole in the event of a material breach.

Below is a list of questions I would have for any cloud provider of any critical service.  In general, if a provider is even willing to consider answering, the results will be shockingly unsatisfactory.  Every company that uses a cloud service, whether it is hosting on AWS or email provisioning by Google or Microsoft, is a Parler waiting to happen: all of your data exposed and then your business terminated.  Cloud services are acceptable only for insecure data and for services that are a convenience, not a core requirement.

Like clouds in the sky, The Cloud is public and ephemeral.


A: A first consideration is data protection and privacy:

What liability does The Company, and employees of The Company individually, have should they sell or lose control of The Customer’s data?   What compensation will The Customer receive if control of The Customer’s data is lost?  Please clarify The Company’s criminal and civil liabilities and contractual obligations under the following scenarios:

1) A third party exfiltrates The Customer’s data entrusted to The Company’s care in an unauthorized manner.

2) An employee of The Company willfully misuses The Customer’s data entrusted to The Company in any way.

3) The Company disposes of equipment in a manner which makes The Customer’s data entrusted to The Company accessible to third parties.

4) The company receives a National Security Letter (NSL) requesting information pertaining to The Customer or to others who have data about The Customer on The Company’s service.

5) The company receives a warrant requesting information pertaining to The Customer or  to others who have data regarding The Customer on The Company’s service.

6) The company receives a subpoena requesting information pertaining to The Customer or to others who have data regarding The Customer on The Company’s service that is opened or has been stored on their hardware for more than 180 days.

7) The company receives a civil discovery request for information pertaining to The Customer or to others who have data regarding The Customer on The Company’s service.

8) The company sells or provides access to The Customer’s data or meta information about The Customer or The Customer’s use of The Company’s system to a third party.

9) The Company changes their terms of service at some future date in a way that is inconsistent with the terms agreed to at the time of The Customer’s engagement of the services of The Company.

10) The Company fails to inform The Customer of a breach of control of The Customer’s data.

11) The Company fails to inform The Customer in a timely manner of a change in policy regarding third party access to The Customer’s data.

12) The Company erroneously exposes The Customer’s data to third party access due to negligence or incompetence.

B: A second consideration is a serial dependency on the reliability of The Company’s service to The Customer’s activity:

By relying on The Company’s service, The Customer typically will rely on the performance and availability of The Company’s products.  If The Company product fails or fails to provide service as expected, The Customer may incur losses, including direct financial losses, loss of reputation, loss of convenience, or other harms.  What warranty does The Company make in the performance of their services?  What recourse does The Customer have for recovery of losses should The Company fail to perform?

Please provide details on what compensation The Company will provide in the following scenarios:

1) The Company can no longer perform the agreed and expected services due to reasons beyond The Company’s control.

2) The Company’s service fails to meet expectations in a way that causes a material loss to The Customer.

3) The Company suffers an extended outage or compromise of service that exceeds a reasonable or agreed maximum accepted duration.

C: A third consideration is the alignment of interests between The Customer and The Company which may not be complete and may diverge in the future:

Engagement of the services of The Company requires an investment of time and resources on the part of The Customer in excess of any fees The Company may charge to adopt The Company’s products and services.  What compensation will be provided should The Company’s products fail to meet  performance and utility expectations?  What compensation will be provided should expenditure of resources be required to compensate for The Company’s failure to meet service expectations?

Please provide details on what compensation The Company will provide in the following scenarios:

1) The Company elects to no longer perform the agreed and expected services due to business decisions made by The Company.

2) Ownership or control of The Company changes to an entity that is not aligned with the values of The Customer and which The Customer can not support, directly or indirectly.

3) Control of The Company passes to a third party e.g. through an acquisition or change of control of the board and which results in use of The Customer’s data in a way that is unacceptable to The Customer.

4) The Company or employees of The Company are found to have engaged in behavior, speech, or conduct which is unacceptable to The Customer.

5) The Company’s products or services are found to be unacceptable to The Customer for any reason, including but not limited to security flaws, missing features, access failures, or lack of performance, and The Company is not able or willing to meet The Customer’s requirements in a timely manner.

If your company depends on third party provisioning of IT services, you’re just one viral tweet¹ away from being out of business.  Build an IT department that knows how to use a command line and run your critical services on your own hardware.

 

 

1) “Toot” now. Any company that relied on Twitter should review this post, but given the rumors around unpaid hosting bills, the chances of recovering any losses from Twitter are dim. At least those businesses that built models around Reddit APIs share your pain.

Posted at 16:01:48 GMT-0700

Category: FreeBSD, Linux, Security

Integrate Fail2Ban with pfSense

Monday, July 13, 2020 

Fail2Ban is a very nice little log monitoring tool that is used to detect cracking attempts on servers and to extract the malicious IPs and—do the things to them—usually temporarily adding the IP address of the source of badness to the server’s firewall “drop” list so that IP’s bad packets are lost in the aether.   This is great, but instead of running a firewall on every server, each locally detecting and blocking malicious actors, it’d be cool to detect across all services and servers on the LAN and push the results up to a central firewall so the bad IPs can’t reach the network at all. This is one method to achieve that goal.

NOTE: pfBlockerNG v3.2.0_3 adds a “_v4” suffix to the auto-generated IPv4 aliases.  The shell script that runs on pfSense to update the alias via pfctl should be modified to match.

I like pfSense as a firewall and run FreeBSD on my servers; I couldn’t find a prebuilt tool to integrate F2B with pfSense, but it wasn’t hard to hack something together so it worked. Basically I have F2B maintain a local “block list” of bad IPs as a simple text file, which is published via Apache, from where pfSense grabs it and applies it as a LAN-wide IP filter.  I use the pfSense package pfBlockerNG to set up the tables, but in the end a custom script running on the pfSense server actually grabs the file and updates the pfSense block lists from it on a 1-minute cron job.

There are plenty of well-written guides for getting F2B working and how to configure it for jails; I found the following useful:

Note that this how-to assumes you have or can get F2B working and that you’ve got pfSense working and know how to install packages (like pfBlockerNG).  I did not find sufficient detail in any one of the above sources to make setting it up a copy-pasta experience, but in aggregate it worked out.

The basic model is:

  • Internet miscreants try to hack your site leaving clues in the log files
  • Fail2Ban combs the log files to determine which IPs are associated with sufficiently bad (and sufficiently frequent) behavior; this is normally used to update the host’s local firewall block list, adding and removing miscreant IPs according to rules defined in .local jail scripts.
  • Instead of updating the firewall’s iptables list, a custom banaction script (see below) writes the IPs to a list that is published to the LAN by a web server (or to the world, if you want to share).
  • pfSense, running on a different server, is configured to pull that list of miscreant IPs into pfBlockerNG as a standard IPv4 (in my case, IPv6 is also possible) block list.
  • To get around pfBlockerNG’s too-slow maximum update rate of 1 hour, a bash script runs on an every-minute cron job on the pfSense server to curl the list over and update pfSense’s pfctl (packet filter control) directly, which to some extent bypasses pfBlockerNG other than letting it maintain the aliases and stats.
  • Packets from would-be evildoers evaporate at the firewall.

This model is federatable – that is, sites can make their lists accessible either via authenticated (e.g. client cert or scp) or open sharing of dynamic lists. This might be a nice thing, as some IP block lists have gone offline or become subscription-only.  Hourly (or less frequent) updates would require only subscribing to someone’s HTTP/FTP-published F2B dynamic miscreant list in pfBlockerNG; more frequent updates would need bash/cron jobs like the one described below.

I do not publish my list because it would seem to provide a bit of extra information to an attacker, but if someone with a specific, allowable IP wants it, I’m happy to make an exception for that IP.

The custom bits I did to get it to work with pfSense are:

Custom F2B Action

On the protected side, I modified the “dummy.conf” script to maintain a list of bad IPs as a banaction in an Apache served location that pfSense could reach.  F2B manages that list, putting bad IPs in “jail” and letting them out as in any normal F2B installation—but instead of using the local server’s packet filter as the banaction, they’re pushed to a web-published text list. This list contains any IP that F2B has jailed, whether in the SSH jail or the Apache jail or the Postfix jail or whatnot based on banactions per jail.local config. Note that until the pfSense part of the process is set up, F2B is only generating a web-published list of miscreants trying to hack your system.

# Fail2Ban configuration file
#
# Author: David Gessel
# Based on: dummy.conf by Cyril Jaquier
#

[Definition]

# Option:  actionstart
# Notes.:  command executed on demand at the first ban (or at the start of Fail2Ban if actionstart_on_demand is set to false).
# Values:  CMD
#


actionstart = if [ -z '<target>' ]; then
                  touch <target>
                  printf %%b "# <init>\n" <to_target>
                  fi
              chmod 755 <target>
              echo "%(debug)s started"

# Option:  actionflush
# Notes.:  command executed once to flush (clear) all IPS, by shutdown (resp. by stop of the jail or this action)
# Values:  CMD
#

actionflush = if [ ! -z '<target>' ]; then
                  rm -f <target>
                  touch <target>
                  printf %%b "# <init>\n" <to_target>
                  fi
              chmod 755 <target>
              echo "%(debug)s clear all"

# Option:  actionstop
# Notes.:  command executed at the stop of jail (or at the end of Fail2Ban)
# Values:  CMD
#
actionstop = if [ ! -z '<target>' ]; then
                  rm -f <target>
                  touch <target>
                  printf %%b "# <init>\n" <to_target>
                  fi
             chmod 755 <target>
             echo "%(debug)s stopped"

# Option:  actioncheck
# Notes.:  command executed once before each actionban command
# Values:  CMD
#
actioncheck =

# Option:  actionban
# Notes.:  command executed when banning an IP. Take care that the
#          command is executed with Fail2Ban user rights.
# Tags:    See jail.conf(5) man page
# Values:  CMD
#

actionban = printf %%b "<ip>\n" <to_target>
            sed -i '' '/^$/d' <target>
            sort -u  -n -t . -k 1,1 -k 2,2 -k 3,3 -k 4,4 <target> -o <target>
            chmod 755 <target>
            echo "%(debug)s banned <ip> (family: <family>)"

# Option:  actionunban
# Notes.:  command executed when unbanning an IP. Take care that the
#          command is executed with Fail2Ban user rights.
# Tags:    See jail.conf(5) man page
# Values:  CMD
#

# flush the IP using grep which is supposed to be about 15x faster than sed  
# grep -v "pattern" filename > filename2; mv filename2 filename


actionunban = grep -v "<ip>" <target> > <temp>
              mv <temp> <target>
              chmod 755 <target>
              echo "%(debug)s unbanned <ip> (family: <family>)"


debug = [<name>] <actname> <target> --

[Init]

init = BRT-DNSBL

target = /usr/jails/claudel/usr/local/www/data-dist/brt/dnsbl/brtdnsbl.txt
temp = <target>.tmp
to_target = >> <target>

The target has to be set for your web-served environment (this would be the FreeBSD default host root).  I’ve configured mine to be visible on the LAN only via .htaccess in the webserverroot/dnsbl/ directory.

AuthType Basic
Order deny,allow
Deny from all
allow from 127.0.0.1/8 10

 

Then you need to call this as a banaction for the infractions that will get miscreants blocked at pfSense, for example in ./jail.local you might modify the default ban action to be:

# Default banning action (e.g. iptables, iptables-new,
# iptables-multiport, shorewall, etc) It is used to define
# action_* variables. Can be overridden globally or per
# section within jail.local file
banaction = brtdnsbl
#banaction_allports = iptables-allports

 

or say in ./jail.d/sshd.local you might set

[sshd]
enabled   = true
filter    = sshd
banaction = brtdnsbl
maxretry  = 2
findtime  = 2d
bantime   = 30m
bantime.increment = true
bantime.factor = 2
bantime.maxtime = 10w
logpath = /var/log/auth.log

 

But do remember to set your ignoreip as appropriate to prevent locking yourself out by having your own IP end up on the bad guy list. You can make a multi-line ignoreip block like so:

ignoreip = 127.0.0.1/8 ::1 10.0.0.0/8 12.114.97.224/27 15.24.60.0/23
 22.31.214.140 13.31.114.141 75.106.28.144 125.136.28.0/23
 195.170.192.0/19 72.91.136.167 24.43.220.14 18.198.235.171 17.217.65.19

 

That is, put a leading space on continuation lines following the ignoreip directive. This isn’t documented AFAIK, so it might break, but it works as of fail2ban version 0.11.2_3.

Once this list is working (check by browsing to your list):

BRT DNS block list as served to the LAN

then move to the pfSense side to actually block these would-be evildoers and script kiddies.

Set up pfBlockerNG

The basic setup of pfBlockerNG is well described, for example in https://protectli.com/kb/how-to-setup-pfblockerng/ and it provides a lot of useful blocking options, particularly with externally maintained lists of internationally recognized bad actors.  There are two basic functions, related but different:

DNSBL

Domain Name Service Block Lists are lists of domains associated with unwanted activity and blocking them at the DNS server level (via Unbound) makes it hard for application level services to reach them.  A great use of DNSBLs is to block all of Microsoft’s telemetry sites, which makes it much harder for Microsoft to steal all your files and data (which they do by default on every “free” Windows 10 install, including actually copying your personal files to their servers without telling you!  Seriously.  That’s pretty much the definition of spyware.)

It also works for non-corporate-sponsored spyware, for example lists of command and control servers found for botnets or ransomware servers.  This can help prevent such attacks by denying trojans and viruses access to their instruction servers.  It can also easily help identify infected computers on the LAN, as any blocked requests are sent to what was supposed to be a null address and logged (to 1.1.1.1 at the moment, which is an unfortunate choice given that 1.1.1.1 is now a well-reputed DNS server like Google’s 8.8.8.8 but, it seems, without all the corporate spying).  There is a bit of irony in blocking lists of telemetry-gathering IPs using lists that are built using telemetry.

Basically DNSBLs prevent services on the LAN from reaching nasty destinations on the internet by answering any DNS request for a malicious domain name with a dead-end IP address.  When your Windows machine wants to report your web browsing habits to Microsoft, it instead gets a “page not found” error.
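Conceptually, each blocked domain ends up as an Unbound override along these lines; the domain is made up and this illustrates the idea rather than pfBlockerNG's literal generated output:

# unbound.conf-style override: answer queries for a blocked domain with the sinkhole address
local-zone: "telemetry.bad-example.com." redirect
local-data: "telemetry.bad-example.com. A 1.1.1.1"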

IPBL: what this process uses to block baddies

This integration concept uses an IPBL, a list of IP addresses to block.  An IPBL works at a lower level than a DNSBL and typically is set up to block traffic in both directions—a script kiddie trying to brute force a password can be blocked from reaching the target services on the LAN, but so too can the reverse direction be blocked—if a malicious entity trips F2B, not only are they blocked from trying to reach in, so too are any sneaky services on your LAN blocked from reaching out to them on the internet.

All we need to do is get the block list F2B is maintaining into pfSense.  pfBlockerNG can subscribe to the list URL you set above easily enough just like any other IPv4 block list but the minimum update time is an hour.

My IPv4 Settings for the Fail2Ban list look like this:

BRT-Bl IPv4 Settings for pfBlockerNG import of Fail2Ban generated block list.

An hour is an awfully long time to let someone try to guess passwords or flood your servers with 404 requests or whatever else you’re using F2B to detect and stop.  So I wrote a simple script that lives on the pfSense server in the /root/custom directory (which isn’t flushed on update) and that executes a few simple commands to grab the IP list F2B maintains, do a fairly trivial grep to exclude any non-IP entries, and use it to update the packet filter drop lists via pfctl:

/root/custom/brtblock.sh
#!/usr/bin/env sh
# set -x # uncomment for "debug"

# Get latest block list
/usr/local/bin/curl -m 15 -s https://brt.llc/dnsbl/brtdnsbl.txt > /var/db/pfblockerng/original/BRTDNSBL.orig
# filter for at least semi-valid IPs.
/usr/bin/grep  -Eo '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' /var/db/pfblockerng/original/BRTDNSBL.orig > /var/db/pfblockerng/native/BRTDNSBL.txt
# update pf tables 
# for pfBlockerNG ≤ v3.2.0_3, omit the _v4 suffix shown below
/sbin/pfctl -t pfB_BRTblock_v4 -T replace -f /var/db/pfblockerng/native/BRTDNSBL.txt > /dev/null 2>&1

HT to Jared Davenport for helping to debug the weird env issues that arise when trying to call these commands directly from cron with the explicit env declaration in the shebang. Uncommenting the set -x directive will generate verbose output for debugging. Getting this script onto your server requires ssh access or console access.  If you’re adventurous it can be entered from the Diagnostics→Command Prompt tool.
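To confirm the script is doing its job, you can run it once by hand from that same shell and inspect the resulting table; the alias name depends on your pfBlockerNG config (see the note above about the _v4 suffix), and the test address here is just a documentation-range example:

# run the updater once, then list the table contents and test a specific address
/root/custom/brtblock.sh
pfctl -t pfB_BRTblock_v4 -T show
pfctl -t pfB_BRTblock_v4 -T test 203.0.113.7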

Preventing Self-Lockouts at pfSense

One of the behaviors of pfBlockerNG that the dev seems to think is a feature is automatic filter order management.  This overrides manually set filter orders and puts pfB’s block filters ahead of all other filters, including, say, allow rules for your own IPs that you never want locked out in case you forget your passwords and accidentally trigger F2B on yourself; any reordering you do by hand will be overridden on the next update. If this default behavior doesn’t work for you, then use a non-default setting for IPv4 block lists and make all IP block list “action” types “Alias_Native.”

pfBlockerNG Native IP Block Lists

To use Alias_Native lists, you write your own firewall filter per pfBlockerNG alias (typically “drop” or “reject”) and then pfBlockerNG won’t auto-order them for you on update. We let pfBlockerNG maintain the alias list at something like pfB_BRTblock (the pfB_ prefix is added by pfBlockerNG), which we then use like any other alias in a manual firewall rule:

BRT-Bl Firewall rule for pfBlockerNG import of Fail2Ban generated block list.

So the list of rules looks like this:

pfSense Filter Order

Cron Plugin

The last ingredient is to update the list on pfSense quickly.  pfSense is designed to be pretty easy to maintain so it overwrites most of the file structure on upgrade, making command line modifications frustratingly transient.  I understand that /root isn’t flushed on an upgrade so the above script should persist inside the /root directory.  But crontab -e modifications just don’t stick around.  To have cron modifications persist, install the “Cron” package with the pfSense package manager.  Then just set up a cron job to run the script above to keep the block list updated.  */1 means run the script once a minute.

pfSense Cron Config
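For reference, what the Cron package sets up is roughly equivalent to this /etc/crontab-style entry (illustrative only; the package UI manages it for you):

# run the block list updater every minute as root
*/1  *  *  *  *  root  /root/custom/brtblock.sh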

Results

The system seems to be working well enough; the list of miscreants is small but effectively targeted: 11,840 packets dropped from an average of about 8-10 bad IPs at any given time.

pfBlockerNG current status

Posted at 05:48:43 GMT-0700

Category: Code, FreeBSD, HowTo, Security, Technology

Let’s Encrypt with security/dehydrated (acme-client is dead)

Thursday, June 27, 2019 

Well….  security/acme-client is dead.  That’s sad.

Long live dehydrated, which uses the same basic authentication method and is pretty much a drop in replacement (unlike scripts which use DNS authentication, say).

In figuring out the transition, I relied on the following guides:

If you’re migrating from acme-client, you can delete it (if you haven’t already)

portmaster -e acme-client

And on to installation.  This guide is for libressl/apache24/bash/dehydrated.  It assumes you’ve been using acme-client and set it up more or less like this.

Installation of what’s needed

If you don’t have bash installed, you will after this. You can also build with ZSH, but set that config option before installing.

cd /usr/ports/security/dehydrated && make install clean && rehash

or

portmaster security/dehydrated

This guide also uses sudo, if it isn’t installed:

cd /usr/ports/security/sudo && make install clean && rehash

or

portmaster security/sudo

Set up directories and accounts

mkdir -p /var/dehydrated
pw groupadd -n _letsencrypt -g 443
pw useradd -n _letsencrypt -u 443 -g 443 -d /var/dehydrated -w no -s /nonexistent
chown -R _letsencrypt /var/dehydrated

If migrating from acme-client, this should already be done, but:

mkdir -p -m 775 /usr/local/www/.well-known/acme-challenge
chgrp _letsencrypt /usr/local/www/.well-known/acme-challenge

# If migrating from acme-client

chmod 775 /usr/local/www/.well-known/acme-challenge
chown -R _letsencrypt /usr/local/www/.well-known

Configure Dehydrated

ee /usr/local/etc/dehydrated/config

add/adjust

014 DEHYDRATED_USER=_letsencrypt

017 DEHYDRATED_GROUP=_letsencrypt

044 BASEDIR=/var/dehydrated

056 WELLKNOWN="/usr/local/www/.well-known/acme-challenge"

065 OPENSSL="/usr/local/bin/openssl"

098 CONTACT_EMAIL=gessel@blackrosetech.com

save and it should run:

su -m _letsencrypt -c 'dehydrated -v'

You should get roughly the following output:

# INFO: Using main config file /usr/local/etc/dehydrated/config
Dehydrated by Lukas Schauer
https://dehydrated.io

Dehydrated version: 0.6.2
GIT-Revision: unknown

OS: FreeBSD 11.2-RELEASE-p6
Used software:
bash: 5.0.7(0)-release
curl: curl 7.65.1
awk, sed, mktemp: FreeBSD base system versions
grep: grep (GNU grep) 2.5.1-FreeBSD
diff: diff (GNU diffutils) 2.8.7
openssl: LibreSSL 2.9.2

File adjustments and scripts

By default it will read /var/dehydrated/domains.txt for the list of domains to renew.

Migrating from acme-client? Reuse your domains.txt, the format is the same.

mv /usr/local/etc/acme/domains.txt /var/dehydrated/domains.txt
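Each line of domains.txt describes one certificate: the first name is the primary domain and any further space-separated names on the same line become subjectAltNames. A hypothetical example:

example.com www.example.com mail.example.com
example.org www.example.org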

Create the deploy script:

ee /usr/local/etc/dehydrated/deploy.sh

The following seems to be sufficient

#!/bin/sh

/usr/local/sbin/apachectl graceful

and make executable

chmod +x /usr/local/etc/dehydrated/deploy.sh

Give the script a try:

/usr/local/etc/dehydrated/deploy.sh

This will test your apache config and that the script is properly set up.

There’s a bit of a pain in the butt inasmuch as the directory structure for the certs changed. My previous guide would put certs at /usr/local/etc/ssl/acme/domain.com/cert.pem; this puts them at /var/dehydrated/certs/domain.com

Check the format of your certificate references and use/adjust as needed. This worked for me – note you can set your key locations to be the same in the config file, but the private key directory structure does change between acme-client and dehydrated.

sed -i '' "s|/usr/local/etc/ssl/acme/|/var/dehydrated/certs/|" /usr/local/etc/apache24/extra/httpd-vhosts.conf

Or if using httpd-ssl.conf

sed -i '' "s|/usr/local/etc/ssl/acme/|/var/dehydrated/certs/|" /usr/local/etc/apache24/extra/httpd-ssl.conf

And privkey moves from /usr/local/etc/ssl/acme/private/domain.com/privkey.pem to /var/dehydrated/certs/domain.com/privkey.pem so….

sed -i '' "s|/var/dehydrated/certs/private/|/var/dehydrated/certs/|" /usr/local/etc/apache24/extra/httpd-vhosts.conf

# or

sed -i '' "s|/var/dehydrated/certs/private/|/var/dehydrated/certs/|" /usr/local/etc/apache24/extra/httpd-ssl.conf

Git sum certs

su -m _letsencrypt -c 'dehydrated --register --accept-terms'

Then get some certs

su -m _letsencrypt -c 'dehydrated -c'

-c is “cron” mode, which is how it will be called by periodic.

and “deploy”

/usr/local/etc/dehydrated/deploy.sh

If you get any errors here, track them down.

Verify your new certs are working

cd /var/dehydrated/certs/domain.com/
openssl x509 -noout -in fullchain.pem -fingerprint -sha256

Load the page in the browser of your choice and view the certificate, which should show the SHA 256 fingerprint matching what you got above.  YAY.

Automate Updates

ee /etc/periodic.conf

insert the following

weekly_dehydrated_enable="YES"
weekly_dehydrated_user="_letsencrypt"
weekly_dehydrated_deployscript="/usr/local/etc/dehydrated/deploy.sh"
weekly_dehydrated_flags="-g"

Note the -g flag is --keep-going: “Keep going after encountering an error while creating/renewing multiple certificates in cron mode.”

Posted at 11:38:45 GMT-0700

Category: FreeBSD, Security, Technology

Meltdown and Spectre

Friday, January 5, 2018 

I find the Meltdown/Spectre drama a bit amusing.  Years ago, I was miffed that Intel (allegedly?) made a backroom deal with server manufacturers to exclude AMD, so I made an effort (which wasn’t trivial) to find an IBM x3655 7943. (I couldn’t, but I got an Intel-based x3650 and an x3655 mobo on eBay, then some QC AMD Opterons and heat sink compound, and off I went.)

Kind of a frankencomputer, but less so than my previous Netfinity 5500 M20 quad socket home server with some homebuilt mods and custom EIDE cables to support the internal (full height! 5.25″!) backup drives.  Ohh… 500GB drives were big back then.

Anyhow, for no better reason than because I was offended by Intel’s tactics a decade ago, my server isn’t vulnerable to “Meltdown” or, according to AMD’s recently posted status, “Spectre 2.”  As it doesn’t have a GUI, nobody is browsing compromised JavaScript-bearing websites on it, so Spectre 1 is unlikely to cause any problems, though I’m keeping an eye on the LLVM updates.

I have, however, updated my phones and laptop and you should too.  Stay on top of this one for a while, as the updates are still coming.  I’ve seen Meltdown-mitigating updates on my Samsung, but not Huawei; on Linux, but not (yet) on Microsoft.  Apple says their December updates not only crippled your older iPhones so you’d make the pilgrimage to the Apple store for your Kool-Aid, but also patched them (if you accepted the slowdown penalty).  Update.  Everything.  Spectre is going to be harder: OS and application and possibly microcode updates may be required, depending on platform.  Be extra cautious about opening unknown attachments or visiting unfamiliar websites.  Run NoScript or another code blocker.  Remember, the ads that sites run can also carry malicious code, and blockers not only make the web run faster, but safer too.  Of course they break the advertising-funded web model you rely on.

However, if you’re dumb enough to have any important data (as an individual or a company) on anyone else’s hardware (e.g. AWS, Google, Azure, etc), good luck.  All anyone has to do to get your datas is run their own VM instance on the same server you’re on.   And while they’re all saying they “only” cost you 5-30% of your performance to isolate kernel memory to protect against meltdown attacks, there does not seem to be a really viable patch for Spectre yet, and perhaps not without new hardware.  If you were running your own server without unknown guests, you wouldn’t need to worry about this.  I will never fail to be astonished at how utterly incompetent most companies are that they can’t run a mail server.  If you can’t afford an IT department, I get it, but if you have IT staff and they can’t run a mail server, are they really IT staff?

Posted at 16:36:40 GMT-0700

Category: HowTo, Security, Technology

Let’s Encrypt….

Sunday, December 10, 2017 

Let’s encrypt, why not?

Wanna know how I did it for FreeBSD/Apache/acme-client, jump below.

Let’s Encrypt is a service from the Internet Security Research Group, backed by, among others, the fine people at Mozilla, who, when they’re not trying to prove that Firefox can be a Chrome clone, do some really good stuff. Certificates are what give you the little warm fuzzy feeling of a green lock icon and, when properly configured, avoid giving you that terrifying feeling that something horrible is about to happen when you visit a site with an expired or self-signed one.

There are some huge structural problems in the certificate concept that seem to exist only to validate the certificate mafia, which can charge $100s per year for a validated certificate, as if executing the script to issue one were somehow expensive. It is not: you can generate one yourself that provides exactly the same security as one provided by a big company that gets its root certs distributed in a browser, but browsers reject these with scary messages, so webmasters have to keep buying them.

Now there’s a theory behind why they’re ripping you off: the premise is that the certificate verifies the site is who it says it is – that if you go to mybank.com, you’re actually visiting your real bank, not being redirected by a man-in-the-middle attack to some fake landing page to harvest your passwords, log into your account, and steal all your cats.  There are a few problems with this:

  • Nobody actually checks a URL so while a certificate sort of adds some weight to the probability that mybank.com is owned by mybank, not some hacker a few tables over ARP poisoning the cafe wifi, it doesn’t do anything if you click on a link to mibank.com.
  • The companies that claim to check IDs and verify owners, do not.  That would cost money. You think they’re gonna actually do that?  No… (CAcert actually does, but they don’t get a root cert because… they do it for free. And don’t have Mozilla’s money and clout.)
  • Stealing a root cert private key can generate significant LOLZ; it happens a lot.
  • Law enforcement the world over has “lawful intercept” certs.  You’re probably on some country’s poop list if you have ever used social media. Their laws permit intercepting your communications.  Some country’s laws somewhere certainly do no matter who you are.
  • But dang, those annoying warnings that do nothing to secure you mean that people who publish a website just for the good of the planet either have to pay up, go through a lot of hassle, or leave their users’ content streams exposed to the world’s prying eyes…

…Until Let’s Encrypt came along.  It is a lovely little set of tools and services that not only issue browser-accepted certs (see the green lock?) but also automate renewal.  They basically check that you have enough control over your website to let a script write a file that they can read back and verify, and if so, you’re who you say you are: the person with write access to the server powering the website they’re giving the certificate to.  That’s all anyone can really do, and it is as secure as any other cert there is for identification of a site: that is, barring stolen certs, URL typos, law enforcement certs, or malicious code on your computer, if you visit https://brt.llc/ and you don’t get any warnings, you’re probably reading data coming off my computer and not some hacker pretending to be me.
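You can simulate that challenge round trip by hand once the directories set up later in this post exist; the token value and domain here are placeholders standing in for what acme-client actually writes:

# approximate the HTTP-01 check manually (token and domain are made up)
echo "not-a-real-token" > /usr/local/www/.well-known/acme-challenge/test-token
curl http://domain.com/.well-known/acme-challenge/test-token
rm /usr/local/www/.well-known/acme-challenge/test-token

If the token comes back over plain HTTP, Let’s Encrypt’s servers can validate you the same way.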

I got Let’s Encrypt to work, but it took some modifications of the existing guides, and I think the service is a good thing that more people should use, so in the spirit of investing some of my resources into the great shared experiment that is Open Source, here’s my How To:


Upstream Guides:

I found these two guides extremely helpful.

https://www.richardfassett.com/2017/01/16/using-lets-encrypt-with-acme-client-on-a-freebsd-11apache-2-4/

https://brnrd.eu/security/2016-12-30/acme-client.html

Step 1: Installing the certificate generation tool

There are a few different software tools to manage the Let’s Encrypt process.  I elected to use Kristaps Dzonsons’ acme-client, ported to FreeBSD by Bernard Spil.

I was using OpenSSL on my site.  Bernard and Kristaps have some strong opinions on OpenSSL and heartbleed and a few other problems and therefore require LibreSSL.   If you’re using it already, great.  If not, you’ll have to install it.  It wasn’t too terrible, but I ran into a few issues:

https://wiki.freebsd.org/LibreSSL
Or, easy peasy https://ootput.github.io/2016/07/20/Switching-to-LibreSSL/

ee /etc/make.conf
 -> set DEFAULT_VERSIONS+= ssl=libressl
portmaster -od security/libressl security/openssl
portmaster -rd security/libressl

if that fails with

===>>> The argument to -r must be a package name, or a glob pattern

Then try:

pkg version -v | grep libre
  libressl-2.6.3 = up-to-date with index
portmaster -rd libressl-2.6.3
# or for a complete refresh
portmaster -Rafd

Curl will probably fail with LibreSSL (and with the latest, if it has brotli support enabled).  Check the google to see if these fixes are still needed, or just:

# cd /usr/ports/ftp/curl
# make config

disable TLS-SRP  https://forums.freebsd.org/threads/56917/

ftp/curl 7.75.0 has an issue with pied piper brotli, which requires modifying the makefile to build --without-brotli as indicated in comment #2 

(Sunpoet, the curl port maintainer, got back to me with an update: when PR/223966 is integrated in Brotli, he will add an optional Brotli support flag and it should work fine at that point without the Makefile edit.)

Step 2: Actually installing acme-client

The really easy part: you should be able to

# portmaster security/acme-client

and be on your way to configuration heaven.

Step 3: Initial configuration

The defaults for acme-client expect certain directories to exist and the installer doesn’t create them.

# mkdir -pm750 /usr/local/www/.well-known && chown -R www:www /usr/local/www/.well-known
# mkdir -pm750 /usr/local/www/.well-known/acme-challenge && chown -R www:www /usr/local/www/.well-known/acme-challenge

The how-to’s seemed to forget the last one.

And make a modification to your httpd.conf file to permit the Let’s Encrypt servers to have access to these folders:

# ee /usr/local/etc/apache24/httpd.conf

add the following:

# Lets Encrypt challenge directory configured per 
# https://brnrd.eu/security/2016-12-30/acme-client.html
<Directory "/usr/local/www/.well-known/">
        Options None
        AllowOverride None
        Require all granted
        Header add Content-Type text/plain
</Directory>

And, for each VHOST that is going to get a cert:

# ee /usr/local/etc/apache24/extra/httpd-vhosts.conf

add to each non-ssl VHOST definition the following:

Alias /.well-known/ /usr/local/www/.well-known/

such that you end up with something like (yours may be different, especially watch out for BasicAuth or ModRewrite, addressed further down):

<VirtualHost IP.NU.MB.ER:80>
    ServerName domain.com
    ServerAdmin admin@domain.com
    DocumentRoot /usr/local/www/data-dist/domain-root
    ServerAlias *.domain.com www.domain.com
    Alias /.well-known/ /usr/local/www/.well-known/
    ErrorLog /var/log/domain-error_log
    CustomLog /var/log/domain-access_log combined
    ScriptAlias /cgi-prg /www/cgi-prg
</VirtualHost>

Don’t forget!

# apachectl restart

Step 4: First Try

At this point the system should be configured sufficiently to do a trial run with a single domain from the command line.  Later on there are some scripts that automate both converting a large number of VHOSTed domains on a server to Let’s Encrypt and maintaining them, with email notifications if anything goes wrong in the, hopefully, fully automatic renewal process.

acme-client -mvnNC /usr/local/www/.well-known/acme-challenge domain.com www.domain.com

This should create all the directories still needed and populate them, then check with the Let’s Encrypt server, get a certificate, and install it in the right place.  Inshallah.

If you get something like

acme-client: transfer buffer: [{ "type": "urn:acme:error:malformed","detail": "Provided agreement URL[https://letsencrypt.org/documents/LE-SA-v1.1.1-August-1-2016.pdf]does not match current agreement URL[https://letsencrypt.org/documents/LE-SA-v1.2-November-15-2017.pdf]","status": 400 }] (267 bytes)

That means the Let’s Encrypt agreement has changed.  You can’t do much but write the port maintainer or wait for an update.  It will get fixed quickly and should only happen about once a year.  I don’t think you’ll hit it at all unless you’re unlucky enough to try to update while it is changing.  I was.

More likely you’ll get something like

acme-client: transfer buffer: [{ "type": "http-01", "status": "invalid", "error": { "type": "urn:acme:error:unauthorized", "detail": "Invalid response from : (etc...)

This means the Let’s Encrypt server had a problem accessing the /.well-known/ directory.  There can be a lot of reasons for this:

  • You didn’t restart apache # apachectl restart
  • There was an error in the config file (look at the output of the restart) and therefore apache didn’t actually reload with your new config.
  • DNS isn’t pointing where you think it is pointing.  Check with nslookup/whois to make sure.  Really.  (A quick check is sketched below.)
  • You have the directories protected in some way – like with .htaccess.  (see below)
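
A quick way to rule out the first three at once is to drop a test file into the challenge directory and fetch it by name, the same way the Let’s Encrypt servers will (host and fetch are in the FreeBSD base system; substitute your own domain):

# echo "reachable" > /usr/local/www/.well-known/acme-challenge/test.txt
# host domain.com
# fetch -o - http://domain.com/.well-known/acme-challenge/test.txt
# rm /usr/local/www/.well-known/acme-challenge/test.txt

If fetch prints the test file, DNS, the Alias, and the Directory permissions are all working; if it 404s or hangs, fix that before asking the Let’s Encrypt servers to try again.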

But if it goes well, you’ll get something like:

acme-client: /usr/local/etc/acme/domain.com/privkey.pem: account key exists (not creating)
acme-client: /usr/local/etc/ssl/acme/private/domain.com/privkey.pem: domain key exists (not creating)
acme-client: : directories
acme-client: acme-v01.api.letsencrypt.org: DNS: 173.223.13.221
acme-client: acme-v01.api.letsencrypt.org: DNS: 2001:418:142b:290::3d5
acme-client: acme-v01.api.letsencrypt.org: DNS: 2001:418:142b:28d::3d5
acme-client: : req-auth: domain.com
acme-client: /usr/local/www/.well-known/acme-challenge/_ffVe6jHNHbIG1XKAeoqQmmtryWMGCKsfHIWWkl5lJw: created
acme-client: : challenge
acme-client: : status
acme-client: : certificate
acme-client: http://cert.int-x3.letsencrypt.org/: full chain
acme-client: cert.int-x3.letsencrypt.org: DNS: 184.23.159.176
acme-client: cert.int-x3.letsencrypt.org: DNS: 184.23.159.177
acme-client: cert.int-x3.letsencrypt.org: DNS: 2001:5a8:100::b817:9fb0
acme-client: cert.int-x3.letsencrypt.org: DNS: 2001:5a8:100::b817:9fb1
acme-client: /usr/local/etc/ssl/acme/domain.com/chain.pem: created
acme-client: /usr/local/etc/ssl/acme/domain.com/cert.pem: created

Yay, you’ve got certs!  Now update your vhosts file to point to the certs you just created.  You may need to add a 443 container or, if it exists, update it to point to the new certs and restart apache.

# ee /usr/local/etc/apache24/extra/httpd-vhosts.conf
<VirtualHost IP.NU.MB.ER:443>
      ServerName domain.com
      ServerAdmin admin@domain.com
      DocumentRoot /usr/local/www/domainroot
      ServerAlias domain.com sub.domain.com
      SSLCertificateFile /usr/local/etc/ssl/acme/domain.com/cert.pem
      SSLCertificateKeyFile /usr/local/etc/ssl/acme/private/domain.com/privkey.pem
      SSLCertificateChainFile /usr/local/etc/ssl/acme/domain.com/fullchain.pem
      Header set Strict-Transport-Security "max-age=31536000; includeSubDomains"
      ErrorLog /var/log/domain-error_log
      CustomLog /var/log/domain-access_log combined
</VirtualHost>

Save and restart, then look for any errors (typos in directory paths etc. will be detected and Apache won’t restart with the new config, but be aware, the running instance won’t quit either).

# apachectl restart
 Performing sanity check on apache24 configuration:
 Syntax OK
 Stopping apache24.
 Waiting for PIDS: 81160.
 Performing sanity check on apache24 configuration:
 Syntax OK
 Starting apache24.

Navigate to https://www.domain.com/ and check out your new green lock.  Check the security details and you should find a valid certificate issued by Let’s Encrypt.

W00T!
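
If you’d rather check from the shell, the openssl (or LibreSSL) s_client and x509 commands will show who issued the cert and when it expires; substitute your own hostname:

# openssl s_client -connect www.domain.com:443 -servername www.domain.com < /dev/null | openssl x509 -noout -issuer -dates

The issuer should come back as Let’s Encrypt (the X3 intermediate at the time of writing) with a notAfter date about three months out.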

Acme-Client Options

# man acme-client has all the deets, but we’re using:

  • -m to append the domain name to certificate and key paths; either always use this or never use it, since mixing will leave acme-client looking in the wrong place.
  • -v for verbose output so we can see what is going on.
  • -n to check if an account key exists and create it if not (no reason to omit).
  • -N to check if a domain key exists and create it if not (also no reason to omit).
  • -C to specify the path to the challenge dir.  These guides all assume a centralized challenge dir outside the main serving path, to which we redirect via an Alias directive.
  • -F which forces the recreation of certs even if they haven’t expired (this counts against your 10 per 3 hours limit).
  • -s which redirects the process to the Let’s Encrypt staging server, which has no volume limits but also doesn’t create certs browsers accept.  (Example below; using it is fine, but switching to the production server afterwards requires cleanup, see further down.)
  • -e which is used to add a SAN to the certificate.  Removing one is a bit more involved (see below).
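
Putting those together, a staging dry run is just the Step 4 command with -s added; handy when you’re debugging and don’t want to burn through the production limits (clean up before re-issuing from production, as described below):

acme-client -mvnsNC /usr/local/www/.well-known/acme-challenge domain.com www.domain.com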

Automating Registration

Let’s say you have a lot of domains; you might want to automate the process.  I modified the renewal script to automate the registration process.  This saved some time, but one quirk is you can only register 10 domains (certificates, including SANs, basically 10 lines of the domains list) per 3 hours (they say; I found it took more like 12 hours before I was allowed to register more).

First create a file with all the domains you want to register for a Let’s Encrypt certificate, in the same format as the renewal script uses (it can be the same file, but I kept it separate while I was experimenting):

# ee /usr/local/etc/acme/newdomains.txt
domain.com www.domain.com
domain2.com www.domain2.com
domain3.com www.domain3.com 
(save)

# ee /usr/local/etc/acme/acme-client-bulk-add.sh

#!/bin/sh

###
#
# This script was adapted by Richard Fassett from letskencrypt.sh
# by Bernard Spil
# See https://brnrd.eu/security/2016-12-30/acme-client.html
#
# and updated again from Richard Fassett's script at
# https://web.archive.org/web/20180813162024/https://www.richardfassett.com/2017/01/16/using-lets-encrypt-with-acme-client-on-a-freebsd-11apache-2-4/
#
# this requires a file called /usr/local/etc/acme/newdomains.txt of the format
# domain.tld sub.domain.tld alt.domain.tld
# domain2.tld 
# domain3.tld sub.domain3.tld 
# etc
#
# This should only be run to bulk-add domains.
###

# Define location of dirs and files
DOMAINSFILE="/usr/local/etc/acme/newdomains.txt"
CHALLENGEDIR="/usr/local/www/.well-known/acme-challenge"

# Loop through the newdomains.txt file with lines like
# example.org www.example.org img.example.org
cat ${DOMAINSFILE} | while read domain subdomains ; do

  # Create the cert directory with the command
  # acme-client -mvnNC /usr/local/www/.well-known/acme-challenge (domain subdomains)
  
  acme-client -mvnN -C "${CHALLENGEDIR}" ${domain} ${subdomains}

done

# chmod +x /usr/local/etc/acme/acme-client-bulk-add.sh
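
Then run it and watch the output; if you hit the registration limit partway through, trim the domains that already succeeded out of newdomains.txt, wait, and run it again:

# /usr/local/etc/acme/acme-client-bulk-add.sh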

A few fixes/recoveries that might be useful at this point: add SAN, remove SAN, switch from staging to production Let’s Encrypt servers.

Automation can break things; you might find you adjusted a few domains incorrectly or want to add a SAN later.

If you need to redo a domain from scratch, for example because you used the -s option, which creates a cert from the staging server that doesn’t have volume limits (maybe you’re testing a lot of domains or trying to debug a particularly tricky .htaccess or DNS condition), you might have created a domain with acme-client -mvnsNC /usr/local/www/.well-known/acme-challenge domain.com www.domain.com and then want to generate the production cert.  You also need to do this to remove a SAN.  If you try without deleting the directories, you’ll get something like unknown SAN entry.  (Replace “domain.com” with your domain.)

# setenv DD domain.com
# rm -r /usr/local/etc/ssl/acme/private/$DD && rm -r /usr/local/etc/acme/$DD && rm -r /usr/local/etc/ssl/acme/$DD && acme-client -mvnFNC /usr/local/www/.well-known/acme-challenge $DD www.$DD

If you need to add a new SAN to an existing domain

acme-client -mvneFNC /usr/local/www/.well-known/acme-challenge domain.com www.domain.com newsub.domain.com

It is the -e that “extends” the certificate.

Step 5: Automating Renewal

You might notice that the duration of the certificate is rather short: 3 months.  You really don’t want to be responding to certificate expired errors every 3 months, so let’s automate the renewal process.  For this you can create two files and store them on your server.  One is the renewal script itself and the other is a list of domains to renew.  This assumes you have more than one domain.  If you only have one domain, this is a bit overkill, but it will work, so why not?  You might get more domains in the future.   Everyone does.

First create a file with your list of domains, call it something creative like “domains.txt”  This is really a certificate request list with the “primary” domain and its Subject Alternative Names (SANs) each on a single line.  In theory the SANs can be all over the place and Let’s Encrypt allows up to 100 per certificate (quite a lot), so the implication of “domains.txt” naming is a bit inaccurate, but that’s what everyone is using so we won’t be contrary.  You have to make sure that all the subdomains resolve; the Let’s Encrypt servers are going to look them up via DNS, and if there aren’t working entries, this will fail with one of the errors above.  Check first (a quick check follows the file listing below).  I have not tested whether, for example, if you own domain.com, domain.org, and domain.net and they all point to the same directory, you can use one cert with different TLDs (or domains) as SANs; you should be able to, but I didn’t try.

# ee /usr/local/etc/acme/domains.txt

domain.com www.domain.com sub.domain.com sub2.domain.com
domain.org www.domain.org
domain2.com www.domain2.com cats.domain2.com kittens.domain2.com
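
Since the Let’s Encrypt servers will look up every name on every line, a quick pre-check that everything in domains.txt actually resolves can save a failed run.  A minimal sketch (wrapped in sh -c because the loop syntax is Bourne-style, not csh):

# sh -c 'while read d s; do for h in $d $s; do host $h > /dev/null || echo "$h does not resolve"; done; done < /usr/local/etc/acme/domains.txt'

Anything it prints needs a DNS entry (or needs to be removed from the file) before you run the renewal script.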

Now that you’ve saved that, the following script is adapted from a few at the references listed above and works on my server.  I made a few adjustments and corrections (there was a name change for acme-client which hasn’t quite propagated through all the HowTos yet).

# ee /usr/local/etc/acme/acme-client-update.sh
#!/bin/sh

###
#
# This script was adapted from letskencrypt.sh by Bernard Spil
# See https://brnrd.eu/security/2016-12-30/acme-client.html
# ... and further modified by David Gessel  
# This script will fail if the directories haven't been set up or the
# domains in domains.txt haven't been successfully verified
#
###

# Define location of dirs and files
DOMAINSFILE="/usr/local/etc/acme/domains.txt"
CHALLENGEDIR="/usr/local/www/.well-known/acme-challenge"

# is changed to 1 if any domains expired and were renewed
CHECKEXPIRATION=0

# Loop through the domains.txt file with lines like
# example.org www.example.org img.example.org
# (read from a redirect rather than piping from cat so that
# CHECKEXPIRATION set inside the loop is still visible after it)
while read domain subdomains ; do

    # acme-client returns RC=2 when certificates
    # weren't changed; use set +e to capture the return code
    set +e
    # Renew the key and certs if required
    acme-client -mvb -C "${CHALLENGEDIR}" ${domain} ${subdomains}
    RC=$?

    # now that we have the return code, set script to exit if
    # nonzero is returned
    set -e

    # if anything is expired, we'll want to do something
    # (e.g., restart HTTPS)
    if [ $RC -ne 2 ] ; then
        CHECKEXPIRATION=1
    fi
done < "${DOMAINSFILE}"

if [ "$CHECKEXPIRATION" -ne "0" ] ; then
        service apache24 restart
fi

# chmod +x /usr/local/etc/acme/acme-client-update.sh

This works quite well and will walk through your domains and renew as needed.

I have 36 domain/certificate lines in my “domains.txt” file and, timing this script, it takes 2.13 seconds to execute on my server.  There’s no real problem running it every night, and if you have a lot of domains, remember you can only get 10 certs at a time and they won’t renew until about a week before expiry, a limitation I ran into in the bulk setup process.  If you have more than about 60 per server, you can spread your domain renewals out over the three months by force renewing blocks of them (example below).
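
For example, to pull one block’s renewal date earlier and stagger the rest, force-renew just that block (remember -F counts against the rate limit, so do it in batches of ten or fewer):

# acme-client -mvnFNC /usr/local/www/.well-known/acme-challenge domain2.com www.domain2.com cats.domain2.com kittens.domain2.com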

You probably want to automate the process as a cron job.  But before we do, let’s address one more little problem: one of the shortcomings of the script process below is that the output messages of the script go to stdout and only cron’s stderr is emailed to the admin.  If your shell environment is wrong or the path to the script is wrong, cron will tell you, but if your domains don’t resolve or the script can’t reach /.well-known/, you will not get any warnings.  That might be a bummer.  So I redirect the output of the acme-client-update.sh script to a log file.  It gets overwritten with each execution, so it doesn’t need to be rotated; it is just the output of the last execution.

It should be filled with lines including “adding SAN” (which it tells you for each domain) and “certificate valid” (which it tells you for each cert that doesn’t need to be renewed).  But it might tell you something else, like it barfed trying to reach the /.well-known/ directory because, say, you messed around with .htaccess or forgot to renew your domain and it is being redirected to parking or something.  The following script first checks to see if there are any lines in /var/log/lets-encrypt-renew other than the expected ones, and if so, emails just those lines.  You shouldn’t get anything until renewal time or if there’s an error.  If you don’t care about renewal notices, you can edit the script to ignore those too.

# ee /usr/local/etc/acme/acme-client-errors.sh
#!/bin/sh

###
# this script scans the log file created by the renewal execution cron job
# then removes any lines containing "adding SAN" or "certificate valid", which
# are normal messages, and mails whatever is left over using the "mail" command
# check full paths (or use relative) but full paths can avoid some errors
# use "# which grep" and "# which mail" on your system to check.

PROBLEM=0

/usr/bin/grep -v "adding SAN" /var/log/lets-encrypt-renew | \
/usr/bin/grep -v "certificate valid" | /usr/bin/cat | \
{ while read status
  do
       PROBLEM=1
  done

  if [ "$PROBLEM" -ne "0" ] ; then
        /usr/bin/grep -v "adding SAN" /var/log/lets-encrypt-renew | \
        /usr/bin/grep -v "certificate valid" | \
        /usr/bin/mail -s "Lets Encrypt Errors" gessel@blackrosetech.com $1
  fi
}

# chmod +x /usr/local/etc/acme/acme-client-errors.sh

My cron configuration is set up as

# crontab -e

#*     *     *   *    *        command to be executed
#-     -     -   -    -
#|     |     |   |    |
#|     |     |   |    +----- day of week (0 - 6) (Sunday=0)
#|     |     |   +------- month (1 - 12)
#|     |     +--------- day of month (1 - 31)
#|     +----------- hour (0 - 23)
#+------------- min (0 - 59)

MAILTO=gessel
# expanded path
PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/games:/usr/local/sbin:/usr/local/bin
SHELL=/bin/csh
# Let's Encrypt renewal check
0       3       *       *       *        /usr/local/etc/acme/acme-client-update.sh > & /var/log/lets-encrypt-renew
0       4       *       *       *        /usr/local/etc/acme/acme-client-errors.sh

Note that this requires that mail works.  On servers that aren’t serving email, I use SSMTP, configured more or less following this guide https://www.freebsd.org/doc/handbook/outgoing-only.html along with https://www.davd.eu/freebsd-send-mails-over-an-external-smtp-server/ and https://www.debarbora.com/freebsd-10-1-setup-ssmtp-for-outgoing-mail/, especially the tip about using # chpass to change the default Full Name for root from “Charlie &” to something useful like “ServerName Root.”
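
Before trusting the cron job to tell you about problems, a one-line smoke test confirms outbound mail actually leaves the box (substitute your own address):

# echo "outbound mail test" | mail -s "mail test" you@example.com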

You can test the mail function by adding a random word (or domain) to your domains.txt file and then executing

# /usr/local/etc/acme/acme-client-update.sh > & /var/log/lets-encrypt-renew
# /usr/local/etc/acme/acme-client-errors.sh

If everything is set up right, you’ll get an email complaining about your random word not being valid.  If you restore the correct domains.txt file and execute the above two commands you should not get an email at all.

# more /var/log/lets-encrypt-renew

should show only lines with “adding SAN” and “certificate valid” in them. If you execute # /usr/local/etc/acme/acme-client-errors.sh you shouldn’t get any message.

.htaccess Problems

If you’re controlling access to a directory or have some non-HTML style process listening, you might run into challenges giving the Let’s Encrypt server access to the /.well-known/ directory.  I found the following formulation worked:

AuthType Basic
AuthName "Please login."
AuthUserFile "/xxx/.htpasswd"
# the directive below also "requires" that the requested URL include /.well-known/
Require expr %{REQUEST_URI} =~ m#^/.well-known/.*#
Require valid-user

Basically, the configuration above allows (requires) a “valid-user” (one with an entry in the AuthUserFile and a valid matching password) and also allows any request for a URL under /.well-known/ and its subdirectories; in Apache 2.4, multiple Require directives at the same level are combined as “any of these,” so either condition grants access.  This also works in /usr/local/etc/apache24/httpd.conf and /usr/local/etc/apache24/extra/httpd-vhosts.conf

modRewrite to HTTPS problems

You can also create problems by rewriting to HTTPS.  You might want to do this now that you have certs that will auto-renew and you can provide a secure experience for everyone.   In order for the Let’s Encrypt servers to reach the /.well-known/ directory, you have to add an exception to the mod_rewrite rule for traffic to this subdirectory, like so:

RewriteEngine on
RewriteCond %{REQUEST_URI} !^/\.well\-known/acme\-challenge/
RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{SERVER_NAME}/$1 [R=301,L]

Also, if you redirect on a 404, some formulations cause problems. This one does not seem to:

ErrorDocument 404 /index.php


Posted at 06:43:58 GMT-0700

Category: FreeBSD, HowTo, Security, Technology

Open letter to the FCC 5 regarding net neutrality

Saturday, November 25, 2017 

I’m in favor of net neutrality for a lot of reasons; a personal reason is that I rely on fair and open transport of my bits to work overseas.  If you happen to find this little screed, you can also thank net neutrality for doing so, as any argument for neutrality would likely be made unavailable by the ISPs, which should charge exorbitant rents for their natural monopolies and would be remiss in their fiduciary responsibility should they fail to take every possible step to maximize shareholder value, for example by permitting their customers access to arguments contrary to their financial or political interests.

I sent the following to the FCC 5.  I am not, I’m sorry to say, optimistic.


Please protect Net Neutrality. It is essential to my ability to operate in Iraq, where I run a technical security business that relies on access to servers and services in the United States. If access to those services becomes subject to a maze of tiered access limitations and tariffs, rather than being treated universally as flat rate data, my business may become untenable unless I move my base of operations to a net neutrality-respecting jurisdiction. The FCC is, at the moment, the only bulwark against a balkanization of data and the collapse of the value premise of the Internet.

While I understand and am sympathetic to both the premise that less government regulation is better in principle and the premise that less regulated markets can be more efficient, this “invisible hand” only works to the public’s benefit in a “well regulated market.” There are significant cases where market forces cannot be beneficial, for example, where the fiduciary responsibility of a company to maximize share-holder value compels exploitation of monopoly rents to the fullest extent permitted by law and, where natural monopolies exist, only regulation prevents those rents from becoming abusive. Delivery of data services is a clear example of one such case, both due to the intrinsic monopoly of physical deployment of services through public resources and due to inherent opportunities to introduce market-distorting biases into those services to promote self-beneficial products and inhibit competition. That this might happen is not idle speculation: network services companies have routinely attempted to unfairly exploit their positions to their benefit and to the harm of fair and open competition and in many cases were restrained only by existing net neutrality laws that the FCC is currently considering rescinding. The consequences of rescinding net neutrality will be anti-competitive, anti-productive, and will stifle innovation and economic growth.

While it is obvious and inevitable that network companies will abuse their natural monopolies to stifle competition, as they have attempted many times, restrained only by previous FCC enforcement of the principle of net neutrality, rescinding net neutrality also poses a direct risk to the validity of democracy. While one can argue that Facebook has already compromised democracy by becoming the world’s largest provider of news through an extraordinarily easily manipulated content delivery mechanism, there’s no evidence that they have yet exploited this to achieve any particular political end nor actively censored criticism of their practices. However, without net neutrality there is no legal protection to inhibit carriers from exploiting their control over content delivery to promote their corporate or political interests while censoring embarrassing or opposing information. As the vast majority of Americans now get their news from on-line resources, control over the delivery of those resources becomes an extraordinarily powerful political weapon; without net neutrality it is perfectly legal for corporations to get “their hands on those weapons” and deploy them against their economic and political adversaries.

Under an implicit doctrine of net neutrality, from a naive, but then technically accurate, concept of the internet as a packet network that would survive a nuclear war and that would treat censorship as “damage” and “route around it automatically,” to 2005’s Madison River ruling, to the 2008 Comcast ruling, to 2010’s Open Internet Order, the internet has flourished as an open network delivering innovative services and resources that all businesses have come to rely on fairly and equally. Overturning that historical doctrine will result in a digital communications landscape in the US that resembles AT&T’s pre-breakup telephone service: you will be permitted to buy only the services that your ISP deems most profitable to themselves. In the long run, if net neutrality is not protected, one can expect the innovation that has centered in the US since the birth of the internet, which some of us remember as the government-sponsored innovation ARPAnet, to migrate to less corporatist climates, such as Europe, where net neutrality is enshrined in law.

The American people are counting on you to protect us from such a catastrophic outcome.

Do not reverse the 2015 Open Internet Order.

Sincerely,

David Gessel

cc:
Mignon.Clyburn@fcc.gov
Brendan.Carr@fcc.gov
Mike.O’Rielly@fcc.gov
Ajit.Pai@fcc.gov
Jessica.Rosenworcel@fcc.gov

Posted at 10:27:50 GMT-0700