HowTo

Let’s Encrypt….

Sunday, December 10, 2017 

Let’s encrypt, why not?

Wanna know how I did it with FreeBSD/Apache/acme-client? Jump below.

Let’s Encrypt is a service backed by the fine people at Mozilla (among others) who, when they’re not trying to prove that Firefox can be a Chrome clone, do some really good stuff. Certificates are what give you the little warm fuzzy feeling of a green lock icon and, when properly configured, spare you that terrifying feeling that something horrible is about to happen when you visit a site with an expired or self-signed one.

There are some huge structural problems in the certificate concept that seem to exist only to validate the certificate mafia, which can charge hundreds of dollars per year for a validated certificate, as if executing the script to issue one were somehow expensive. It is not: you can generate one yourself that provides exactly the same security as one provided by a big company that gets their root certs distributed in browsers, but browsers reject self-signed certs with scary messages, so webmasters have to keep buying commercial ones.

Now there’s a theory behind why they’re ripping you off: the premise is that the certificate verifies the site is who it says it is – that if you go to mybank.com, you’re actually visiting your real bank, not being redirected by a man-in-the-middle attack to some fake landing page to harvest your passwords, log into your account, and steal all your cats.  There are a few problems with this:

  • Nobody actually checks a URL so while a certificate sort of adds some weight to the probability that mybank.com is owned by mybank, not some hacker a few tables over ARP poisoning the cafe wifi, it doesn’t do anything if you click on a link to mibank.com.
  • The companies that claim to check IDs and verify owners do not.  That would cost money. You think they’re gonna actually do that?  No… (CAcert actually does, but they don’t get a root cert because… they do it for free, and don’t have Mozilla’s money and clout.)
  • Stealing a root cert private key can generate significant LOLZ; it happens a lot.
  • Law enforcement the world over has “lawful intercept” certs.  You’re probably on some country’s poop list if you have ever used social media. Their laws permit intercepting your communications.  Some country’s laws somewhere certainly do no matter who you are.
  • But dang, those annoying warnings that do nothing to secure you mean that people who publish a website just for the good of the planet either have to pay up, go through a lot of hassle, or leave their users’ content streams exposed to the world’s prying eyes…

…Until Let’s Encrypt came along.  It is a lovely little set of tools and services that not only issue browser-accepted certs (see the green lock?) but also automate renewal.  They basically check that you have enough control over your website to let a script write a file that they can read back and verify, and if so, you’re who you say you are: the person with write access to the server powering the website they’re giving the certificate to.  That’s all anyone can really do, and it is as secure as any other cert there is for identifying a site: barring stolen certs, URL typos, law enforcement certs, or malicious code on your computer, if you visit https://blackrosetech.com and you don’t get any warnings, you’re probably reading data coming off my computer and not some hacker pretending to be me.

I got Let’s Encrypt to work, but it took some modifications of the existing guides, and I think the service is a good thing that more people should use, so in the spirit of investing some of my resources into the great shared experiment that is Open Source, here’s my How To:


Upstream Guides:

I found these two guides extremely helpful.

https://www.richardfassett.com/2017/01/16/using-lets-encrypt-with-acme-client-on-a-freebsd-11apache-2-4/

https://brnrd.eu/security/2016-12-30/acme-client.html

Step 1: Installing the certificate generation tool

There are a few different software tools to manage the Let’s Encrypt process.  I elected to use Kristaps Dzonsons’ acme-client, ported to FreeBSD by Bernard Spil.

I was using OpenSSL on my site.  Bernard and Kristaps have some strong opinions on OpenSSL, Heartbleed, and a few other problems, and therefore require LibreSSL.   If you’re using it already, great.  If not, you’ll have to install it.  It wasn’t too terrible, but I ran into a few issues:

https://wiki.freebsd.org/LibreSSL
Or, easy peasy https://ootput.github.io/2016/07/20/Switching-to-LibreSSL/

# ee /etc/make.conf
DEFAULT_VERSIONS+= ssl=libressl
# portmaster -od security/libressl security/openssl
# portmaster -rd security/libressl

if that fails with

===>>> The argument to -r must be a package name, or a glob pattern

Then try:

# pkg version -v | grep libre
libressl-2.6.3 = up-to-date with index
# portmaster -rd libressl-2.6.3
or for a complete refresh
# portmaster -Rafd

Curl will probably fail with LibreSSL (and with the latest, if it has brotli support enabled).  Check the google to see if these fixes are still needed, or just:

# cd /usr/ports/ftp/curl
# make config

disable TLS-SRP  https://forums.freebsd.org/threads/56917/

ftp/curl 7.75.0 has an issue with pied piper brotli, which requires modifying the makefile to build --without-brotli as indicated in comment #2 

(Sunpoet, the curl port maintainer, got back to me with an update: when PR/223966 is integrated in Brotli, he will add an optional Brotli support flag and it should work fine at that point without the Makefile edit.)

Step 2: Actually installing acme-client

The really easy part: you should be able to

# portmaster security/acme-client

and be on your way to configuration heaven.

Step 3: Initial configuration

The defaults for acme-client expect certain directories to exist and the installer doesn’t create them.

# mkdir -pm750 /usr/local/www/.well-known && chown -R www:www /usr/local/www/.well-known
# mkdir -pm750 /usr/local/www/.well-known/acme-challenge && chown -R www:www /usr/local/www/.well-known/acme-challenge

The how-tos seemed to forget the last one.

And make a modification to your httpd.conf file to permit the Let’s Encrypt servers to have access to these folders:

# ee /usr/local/etc/apache24/httpd.conf

add the following:

# Lets Encrypt challenge directory configured per 
# https://brnrd.eu/security/2016-12-30/acme-client.html
<Directory "/usr/local/www/.well-known/">
        Options None
        AllowOverride None
        Require all granted
        Header add Content-Type text/plain
</Directory>

And, for each VHOST that is going to get a cert:

# ee /usr/local/etc/apache24/extra/httpd-vhosts.conf

add to each non-ssl VHOST definition the following:

Alias /.well-known/ /usr/local/www/.well-known/

such that you end up with something like (yours may be different, especially watch out for BasicAuth or ModRewrite, addressed further down):

<VirtualHost IP.NU.MB.ER:80>
    ServerName domain.com
    ServerAdmin admin@domain.com
    DocumentRoot /usr/local/www/data-dist/domain-root
    ServerAlias *.domain.com www.domain.com
    Alias /.well-known/ /usr/local/www/.well-known/
    ErrorLog /var/log/domain-error_log
    CustomLog /var/log/domain-access_log combined
    ScriptAlias /cgi-prg /www/cgi-prg
</VirtualHost>

Don’t forget!

# apachectl restart

Step 4: First Try

At this point the system should be configured sufficiently to do a trial run with a single domain from the command line. Later on there are some scripts that will automate the process of both converting a large number of VHOSTed domains on a server to Let’s Encrypt and for maintaining them and getting email notifications if anything goes wrong in the, hopefully, fully automatic renewal process.

# acme-client -mvnNC /usr/local/www/.well-known/acme-challenge domain.com www.domain.com

This should create all the directories still needed and populate them, then check in with the Let’s Encrypt server, get a certificate, and install it in the right place.  Inshallah.

If you get something like

acme-client: transfer buffer: [{ "type": "urn:acme:error:malformed","detail": "Provided agreement URL[https://letsencrypt.org/documents/LE-SA-v1.1.1-August-1-2016.pdf]does not match current agreement URL[https://letsencrypt.org/documents/LE-SA-v1.2-November-15-2017.pdf]","status": 400 }] (267 bytes)

That means the Let’s Encrypt agreement has changed.  You can’t do much but write the port maintainer or wait for an update.  It will get fixed quickly and should only happen once a year.  I don’t think you’ll get it at all unless you’re unlucky enough to try to update when it is changing.  I was.

More likely you’ll get something like

acme-client: transfer buffer: [{ "type": "http-01", "status": "invalid", "error": { "type": "urn:acme:error:unauthorized", "detail": "Invalid response from http://www.domain.com/.well-known/acme-challenge/evReZz6s1uSVZbgEVdKkWElx_NHb3NmbbwGbADUwRtQ: (etc...)

This means the Let’s Encrypt server had a problem accessing the /.well-known/ directory.  There can be a lot of reasons for this:

  • You didn’t restart Apache: # apachectl restart
  • There was an error in the config file (look at the output of the restart) and therefore Apache didn’t actually reload with your new config.
  • DNS isn’t pointing where you think it is pointing.  Check with nslookup/whois to make sure.  Really.  (A quick check is sketched just below.)
  • You have the directories protected in some way – like with .htaccess.  (see below)
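For example, a quick sanity check on the DNS point above is to resolve every name you’re requesting a cert for and make sure each answer points at the server you’re configuring, something like:

# nslookup domain.com
# nslookup www.domain.com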

But if it goes well, you’ll get something like:

acme-client: /usr/local/etc/acme/domain.com/privkey.pem: account key exists (not creating)
acme-client: /usr/local/etc/ssl/acme/private/domain.com/privkey.pem: domain key exists (not creating)
acme-client: https://acme-v01.api.letsencrypt.org/directory: directories
acme-client: acme-v01.api.letsencrypt.org: DNS: 173.223.13.221
acme-client: acme-v01.api.letsencrypt.org: DNS: 2001:418:142b:290::3d5
acme-client: acme-v01.api.letsencrypt.org: DNS: 2001:418:142b:28d::3d5
acme-client: https://acme-v01.api.letsencrypt.org/acme/new-authz: req-auth: domain.com
acme-client: /usr/local/www/.well-known/acme-challenge/_ffVe6jHNHbIG1XKAeoqQmmtryWMGCKsfHIWWkl5lJw: created
acme-client: https://acme-v01.api.letsencrypt.org/acme/challenge/5HKzgB9diS5ecS6WbYJsHeEXsSWZeMhdYFmMfN9voHA/2673529867: challenge
acme-client: https://acme-v01.api.letsencrypt.org/acme/challenge/5HKzgB9diS5ecS6WbYJsHeEXsSWZeMhdYFmMfN9voHA/2673529867: status
acme-client: https://acme-v01.api.letsencrypt.org/acme/new-cert: certificate
acme-client: http://cert.int-x3.letsencrypt.org/: full chain
acme-client: cert.int-x3.letsencrypt.org: DNS: 184.23.159.176
acme-client: cert.int-x3.letsencrypt.org: DNS: 184.23.159.177
acme-client: cert.int-x3.letsencrypt.org: DNS: 2001:5a8:100::b817:9fb0
acme-client: cert.int-x3.letsencrypt.org: DNS: 2001:5a8:100::b817:9fb1
acme-client: /usr/local/etc/ssl/acme/domain.com/chain.pem: created
acme-client: /usr/local/etc/ssl/acme/domain.com/cert.pem: created
acme-client: /usr/local/etc/ssl/acme/domain.com/fullchain.pem: created

Yay, you’ve got certs!  Now update your vhosts file to point to the certs you just created.  You may need to add a 443 container or, if it exists, update it to point to the new certs and restart apache.

# ee /usr/local/etc/apache24/extra/httpd-vhosts.conf

<VirtualHost IP.NU.MB.ER:443>
      ServerName domain.com
      ServerAdmin admin@domain.com
      DocumentRoot /usr/local/www/domainroot
      ServerAlias domain.com sub.domain.com
      SSLCertificateFile /usr/local/etc/ssl/acme/domain.com/cert.pem
      SSLCertificateKeyFile /usr/local/etc/ssl/acme/private/domain.com/privkey.pem
      SSLCertificateChainFile /usr/local/etc/ssl/acme/domain.com/fullchain.pem
      Header set Strict-Transport-Security "max-age=31536000; includeSubDomains"
      ErrorLog /var/log/domain-error_log
      CustomLog /var/log/domain-access_log combined
</VirtualHost>

Save and restart, then look for any errors (typos in directory paths etc. will be detected and Apache won’t restart with the new config, but be aware, the running instance won’t quit either).

# apachectl restart
 Performing sanity check on apache24 configuration:
 Syntax OK
 Stopping apache24.
 Waiting for PIDS: 81160.
 Performing sanity check on apache24 configuration:
 Syntax OK
 Starting apache24.

Navigate to https://domain.com/ and check out your new green lock.  Check the connection’s security details and you should find your shiny new, browser-trusted certificate.

W00T!

Acme-Client Options

# man acme-client has all the deets, but we’re using:

  • -m to append the domain name to paths; use this consistently: either always or never.
  • -v for verbose output so we can see what is going on.
  • -n to check if an account key exists and create if not (no reason to omit)
  • -N to check if a domain key exists and create if not (also no reason to omit)
  • -C to specify the path to the challenge dir.  These guides all assume a centralized challenge dir outside the main serving path, to which requests are mapped via an Alias directive.
  • -F which forces the recreation of certs even if they haven’t expired (this counts against your 10 per 3 hours limit)
  • -s which redirects the process to the Let’s Encrypt staging server, which has no volume limits but also doesn’t create certs browsers accept.  (Using this is fine, but requires cleanup to switch to the production server, see below)
  • -e which is used to add a SAN to the certificate.  Removing one is a bit more involved (see below).
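Putting the common flags together, a typical first dry run for a new domain against the staging server (per the -s notes above) would look something like the line below; drop the -s once everything verifies, clean up as described further down, and pull a real certificate.

# acme-client -mvnsNC /usr/local/www/.well-known/acme-challenge domain.com www.domain.com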

Automating Registration

Let’s say you have a lot of domains; you might want to automate the process.  I modified the renewal script to automate the registration process.  This saved some time, but one quirk is you can only register 10 domains (certificates, including SANs, basically 10 lines of the domains list) per 3 hours (that’s what they say; I found it takes more like 12 hours to be allowed to register more).

First create a file with all the domains you want to register for a Let’s Encrypt certificate, in the same format as the renewal script uses (it can be the same file, but I made mine different as I was experimenting).

# ee /usr/local/etc/acme/newdomains.txt
domain.com www.domain.com
domain2.com www.domain2.com
domain3.com www.domain3.com 
(save)

# ee /usr/local/etc/acme/acme-client-bulk-add.sh

#!/bin/sh

###
#
# This script was adapted by Richard Fassett from letskencrypt.sh
# by Bernard Spil
# See https://brnrd.eu/security/2016-12-30/acme-client.html
#
# and updated again from richard fassett's script at
# https://www.richardfassett.com/2017/01/16/using-lets-encrypt-with-acme-client-on-a-freebsd-11apache-2-4/#comment-282
#
# this requires a file called /usr/local/etc/acme/newdomains.txt of the format
# domain.tld sub.domain.tld alt.domain.tld
# domain2.tld 
# domain3.tld sub.domain3.tld
# etc
#
# This should only be run to bulk-add domains.
###

# Define location of dirs and files
DOMAINSFILE="/usr/local/etc/acme/newdomains.txt"
CHALLENGEDIR="/usr/local/www/.well-known/acme-challenge"

# Loop through the newdomains.txt file with lines like
# example.org www.example.org img.example.org
cat ${DOMAINSFILE} | while read domain subdomains ; do

  # Create the cert directory with the command
  # acme-client -mvnNC /usr/local/www/.well-known/acme-challenge (domain subdomains)
  
  acme-client -mvnN -C "${CHALLENGEDIR}" ${domain} ${subdomains}

done

# chmod +x /usr/local/etc/acme/acme-client-bulk-add.sh

A few fixes/recoveries that might be useful at this point: add SAN, remove SAN, switch from staging to production Let’s Encrypt servers.

Automation can break things, you might find you adjusted a few domains incorrectly or want to add a SAN later.

If you need to redo a domain from scratch, for example because you used the -s option, which gets a cert from the staging server that doesn’t have volume limits (maybe you’re testing a lot of domains or trying to debug a particularly tricky .htaccess or DNS condition): you might have created a cert with acme-client -mvnsNC /usr/local/www/.well-known/acme-challenge domain.com www.domain.com and now want to generate the production cert.  You also need to do this to remove a SAN.  If you try without deleting the directories, you’ll get something like “unknown SAN entry.”  (Replace “domain.com” with your domain in the commands below.)

# setenv DD domain.com
# rm -r /usr/local/etc/ssl/acme/private/$DD && rm -r /usr/local/etc/acme/$DD && rm -r /usr/local/etc/ssl/acme/$DD && acme-client -mvnFNC /usr/local/www/.well-known/acme-challenge $DD www.$DD

If you need to add a new SAN to an existing domain

acme-client -mvneFNC /usr/local/www/.well-known/acme-challenge domain.com www.domain.com newsub.domain.com

it is the -e that “extends” the certificate.

Step 5: Automating Renewal

You might notice that the duration of the certificate is rather short: 3 months.  You really don’t want to be responding to certificate expired errors every 3 months, so let’s automate the renewal process.  For this you can create two files and store them on your server.  One is the renewal script itself and the other is a list of domains to renew.  This assumes you have more than one domain.  If you only have one domain, this is a bit overkill, but it will work, so why not?  You might get more domains in the future.   Everyone does.

First create a file with your list of domains, call it something creative like “domains.txt”  This is really a certificate request list with the “primary” domain and Subject Alternative Names (SANs) each on a single line.  In theory the SANs can be all over the place and Let’s Encrypt allows up to 100 per certificate (quite a lot), so the implication of “domains.txt” naming is a bit inaccurate, but that’s what everyone is using so we won’t be contrary.  You have to make sure that all the subdomains resolve—the Let’s Encrypt servers are going to look them up via DNS and if there aren’t working entries, this will fail with one of the errors above.  Check first.  I have not tested whether, if for example, you own domain.com, domain.org, and domain.net and they all point to the same directory, you can use one cert with different TLDs (or domains) as SANs; you should be able to, but I didn’t try.

# ee /usr/local/etc/acme/domains.txt

domain.com www.domain.com sub.domain.com sub2.domain.com
domain.org www.domain.org
domain2.com www.domain2.com cats.domain2.com kittens.domain2.com

Now that you’ve saved that, the following script is adapted from a few at the references listed above and works on my server.  I made a few adjustments and corrections (there was a name change for acme-client which hasn’t quite propagated through all the HowTos yet).

# ee /usr/local/etc/acme/acme-client-update.sh

#!/bin/sh

###
#
# This script was adapted from letskencrypt.sh by Bernard Spil
# See https://brnrd.eu/security/2016-12-30/acme-client.html
# ... and further modified by David Gessel  
# This script will fail if the directories haven't been set up or the
# domains in domains.txt haven't been successfully verified
#
###

# Define location of dirs and files
DOMAINSFILE="/usr/local/etc/acme/domains.txt"
CHALLENGEDIR="/usr/local/www/.well-known/acme-challenge"

# is changed to 1 if any domains expired and were renewed
CHECKEXPIRATION=0

# Loop through the domains.txt file with lines like
# example.org www.example.org img.example.org
# (read the file directly rather than piping it through cat so that
# CHECKEXPIRATION set inside the loop is still visible after the loop)
while read domain subdomains ; do

    # acme-client returns RC=2 when certificates
    # weren't changed; use set +e to capture the return code
    set +e
    # Renew the key and certs if required
    acme-client -mvb -C "${CHALLENGEDIR}" ${domain} ${subdomains}
    RC=$?

    # now that we have the return code, set script to exit if
    # nonzero is returned
    set -e

    # if anything is expired, we'll want to do something
    # (e.g., restart HTTPS)
    if [ $RC -ne 2 ] ; then
        CHECKEXPIRATION=1
    fi
done < ${DOMAINSFILE}

if [ "$CHECKEXPIRATION" -ne "0" ] ; then
        service apache24 restart
fi

# chmod +x /usr/local/etc/acme/acme-client-update.sh

This works quite well and will walk through your domains and renew as needed.

I have 36 domain/certificate lines in my “domains.txt” file and, timing this script, it takes 2.13 seconds to execute on my server.  There’s no real problem running it every night, and if you have a lot of domains, you should remember you can only get 10 certs at a time and they won’t renew until about a week before expiry, a limitation I ran into in the bulk setup process.  You can spread your domain renewals out over the three months by force renewing blocks of them if you have more than about 60 per server (a rough sketch of that follows).
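A rough, untested sketch of that blocked force renewal (the script name and the line range here are arbitrary examples): pick a slice of domains.txt with sed and force-renew only those lines, running a different slice every few weeks.

# ee /usr/local/etc/acme/acme-client-force-block.sh

#!/bin/sh

# Sketch: force-renew only lines 1-10 of domains.txt to stagger expiry dates.
# Adjust the sed range for each run; remember -F counts against the rate limit.
DOMAINSFILE="/usr/local/etc/acme/domains.txt"
CHALLENGEDIR="/usr/local/www/.well-known/acme-challenge"

sed -n '1,10p' ${DOMAINSFILE} | while read domain subdomains ; do
    acme-client -mvFnN -C "${CHALLENGEDIR}" ${domain} ${subdomains}
done

service apache24 restart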

You probably want to automate the process as a cron job. But before we do, let’s address one more little problem: one shortcoming of this process is that the script’s messages go to stdout and only cron’s stderr is emailed to the admin. If your shell environment is wrong or the path to the script is wrong, cron will tell you, but if your domains don’t resolve or the script can’t reach /.well-known/, you will not get any warnings. That might be a bummer. So I redirect the output of the client-update.sh script to a log file. It gets overwritten with each execution, so it doesn’t need to be rotated – it is just the output of the last run. It should be filled with lines including “adding SAN” (which it tells you for each domain) and “certificate valid” (which it tells you for each cert that doesn’t need to be renewed). But it might tell you something else, like that it barfed trying to reach the /.well-known/ directory because, say, you messed around with .htaccess or forgot to renew your domain and it is being redirected to parking or something. The following script first checks whether there are any lines in /var/log/lets-encrypt-renew other than the expected ones, and if so, emails just those lines. You shouldn’t get anything until renewal time or if there’s an error. If you don’t care about renewal notices, you can edit the script to ignore those too.

# ee /usr/local/etc/acme/acme-client-errors.sh

#!/bin/sh

###
# this script scans the log file created by the renewal execution cron job
# then removes any lines containing "adding SAN" or "certificate valid", which
# are normal messages, and mails whatever is left over using the "mail" command
# check full paths (or use relative) but full paths can avoid some errors
# use "# which grep" and "# which mail" on your system to check.

PROBLEM=0

/usr/bin/grep -v "adding SAN" /var/log/lets-encrypt-renew | \
/usr/bin/grep -v "certificate valid" | /usr/bin/cat | \
{ while read status
  do
       PROBLEM=1
  done

  if [ "$PROBLEM" -ne "0" ] ; then
        /usr/bin/grep -v "adding SAN" /var/log/lets-encrypt-renew | \
        /usr/bin/grep -v "certificate valid" | \
        /usr/bin/mail -s "Lets Encrypt Errors" gessel@blackrosetech.com $1
  fi
}

# chmod +x /usr/local/etc/acme/acme-client-errors.sh

My cron configuration is set up as

# crontab -e

#*     *     *   *    *        command to be executed
#-     -     -   -    -
#|     |     |   |    |
#|     |     |   |    +----- day of week (0 - 6) (Sunday=0)
#|     |     |   +------- month (1 - 12)
#|     |     +--------- day of month (1 - 31)
#|     +----------- hour (0 - 23)
#+------------- min (0 - 59)

MAILTO=gessel
# expanded path
PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/games:/usr/local/sbin:/usr/local/bin
SHELL=/bin/csh
# Let's Encrypt renewal check
0       3       *       *       *        /usr/local/etc/acme/acme-client-update.sh >& /var/log/lets-encrypt-renew
0       4       *       *       *        /usr/local/etc/acme/acme-client-errors.sh

Note that this requires that mail works. On servers that aren’t serving email, I use SSMTP and configured it more or less following this guide https://www.freebsd.org/doc/handbook/outgoing-only.html and https://www.davd.eu/freebsd-send-mails-over-an-external-smtp-server/ and this https://www.debarbora.com/freebsd-10-1-setup-ssmtp-for-outgoing-mail/ especially the tip about using # chpass to change the default Full Name for root from “Charlie &” to something useful like “ServerName Root.”
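Once the system can send mail at all, a quick sanity check of just the mail step (substitute your own address) is something like:

# echo "outbound mail test" | mail -s "mail test" admin@example.com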

You can test the mail function by adding a random word (or domain) to your domains.txt file and then executing

# /usr/local/etc/acme/acme-client-update.sh >& /var/log/lets-encrypt-renew
# /usr/local/etc/acme/acme-client-errors.sh

If everything is set up right, you’ll get an email complaining about your random word not being valid.  If you restore the correct domains.txt file and execute the above two commands you should not get an email at all.

# more /var/log/lets-encrypt-renew

should show only lines with “adding SAN” and “certificate valid” in them. If you execute # /usr/local/etc/acme/acme-client-errors.sh you shouldn’t get any message.
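If you prefer a numeric check, something like this counts the lines other than the expected ones and should print 0:

# grep -cv -e "adding SAN" -e "certificate valid" /var/log/lets-encrypt-renew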

.htaccess Problems

If you’re controlling access to a directory or have some non-HTML style process listening, you might run into challenges giving the Let’s Encrypt server access to the /.well-known/ directory.  I found the following formulation worked:

AuthType Basic
AuthName "Please login."
AuthUserFile "/xxx/.htpasswd"
# the directive below also "requires" that the requested URL include /.well-known/
Require expr %{REQUEST_URI} =~ m#^/.well-known/.*#
Require valid-user

Basically the configuration above allows (requires) a “valid-user” (one with an entry in the AuthUserFile and a valid matching password) and also requires (allows) a URL that is going to /.well-known/ and subdirectories thereof.  This also works in /usr/local/etc/apache24/httpd.conf and /usr/local/etc/apache24/extra/httpd-vhosts.conf

modRewrite to HTTPS problems

You can also create problems by rewriting to HTTPS.  You might want to do this now that you have certs that will auto-renew and you can provide a secure experience for everyone.   In order to get to the /.well-known/ directory, you have to add an exception to the mod_rewrite rule for traffic to this subdirectory, like so:

RewriteEngine on
RewriteCond %{REQUEST_URI} !^/\.well\-known/acme\-challenge/
RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{SERVER_NAME}/$1 [R=301,L]

Also, if you redirect on a 404, some formulations cause problems. This one does not seem to:

ErrorDocument 404 /index.php


Posted at 06:43:58 UTC

Category: FreeBSD, HowTo, Security, technology

10 Gbyte Win10 Spyware “upgrade” now forced on users

Sunday, September 27, 2015 

Microsoft has, historically, done some amazingly boneheaded things like Clippy, Vista, Win 8, and Win 10.  They have one really good product, Excel; otherwise everything they’ve done has succeeded only through illegal exploitation of an aggressively defended monopoly. OK, maybe the Xbox is competitive, but I’m not much of a gamer.

Sadly for the world, the model of selling users for profit to advertisers and spies has gained ground to the point where Microsoft was starting to look like the least evil major entity in closed-source computing.  Poor Microsoft.  To lose the evil crown must be at least as humiliating as their waning revenue and abject failures in the mobile space (so strange… try to enter a space where they don’t have a monopoly to force users to accept their mediocre crap and they fail, who’da thunk it?)

“There is a difference between policy and practice. We don’t read customers mail. We don’t read customer documents. We don’t triangulate YouTube views and searches. We don’t use the content of your Hotmail to target ads in Bing,”

Frank Shaw, Corporate Vice President of Corporate Communications for Microsoft

Well, never fear: Windows 10 is here and they’re radically one-upping the data theft economy by p0wning not just the data you idiotically entrust to someone else’s server for free, without ever considering why they’re giving you that useful service for “free” or what they, or whoever buys their ultimately failed business, might do with your data, but also the data you consider too sensitive for the Google or the Apple.  Windows 10 exfiltrates all your data to Microsoft for their use and profit without your knowledge.  Don’t believe it? Read their Privacy Statement.

Finally, we will access, disclose and preserve personal data, including your content (such as the content of your emails, other private communications or files in private folders), when we have a good faith belief that doing so is necessary.

And it is free (as in beer but not as in speech).  What could possiblay go wrong?

Well, people weren’t updating fast enough so Microsoft is now pushing that update on you involuntarily.  Do you have a data cap that a 10G download might break and cost you money?  So what!  Your loss!  Don’t have enough space on your drive for a 10G hidden folder of crapware foisted off on you without your permission?  Tough crap, Microsoft don’t care.

To be clear, Windows 10 is spyware.  If this was coming from a teenage hacker somewhere, they’d be facing jail time.  It is absolutely, unequivocally malware that will create a liability for you if you use it.  If you have any confidentiality requirement, you must not install windows 10.  Ever. Not even on your home machine.  Just don’t.

The only way to prevent this is really annoying and a little risky: disable automatic downloads.  One of the problems with Microsoft’s operating systems is the unbelievably crappy spaghetti code that results in a constant flow of cracks, a week’s worth are patched every Tuesday.  About 1 serious vulnerability every fortnight these days (note this is about the same as Ubuntu and about 1/4 the rate of OSX or iOS, why people think Apple products are “secure” is beyond me – live in that fantasy walled garden!  But nice logo you paid a 50% premium for on your shiny device). Not patching increases the risk that some hacker somewhere will steal your datas, but patching guarantees that Microsoft will steal your datas.  Keep your anti-virus up to date and live a little dangerously by keeping Microsoft out.

Here’s an interesting article: how-to-clean-the-windows-10-crapware-off-your-windows-7-or-81-pc

And a tool referenced in that article: GWX control panel (that can help remove the windows 10 infection if you got it).

And a list of patches I found that are related to Win10 malware that you can remove if you haven’t installed it yet (Windows 10 eliminates the ability to choose or selectively remove patches; once you’re in for the ride, you’re chained in: all or nothing.)

Basic advice:

  • Disable automatic updates and automatic downloads of updates.
  • Review each update Microsoft offers.  This is tedious, my win 7 install reports 384 updates, 5-10 a week, but other than security patches, you probably don’t really need them.  Only install a patch if there’s a reason.  Sorry, that sucks, but there’s always Linux Mint: free like beer AND free like speech.
  • If you’re still on Win 7/8, uninstall the spyware Microsoft has probably already installed.  If you’re on Windows 8, you probably want to upgrade to Windows 7 if at all possible.
  • If you succumbed to the pressure and became a Microsoft Product by installing Windows 10, uninstall it.
  • If uninstall doesn’t work, switch to Mint or reinstall 7.

Most importantly, if you develop software for servers or for end users, stop developing for Microsoft (and Apple too).  Respect the privacy of your customers by not exposing them to exploitation by desperate operating system vendors.  In many classes of applications, your customers buy their computers to run your software: they don’t care what operating system it requires – that should be transparent and painless.  Microsoft is no longer an even remotely acceptable choice.  Server applications should run under FreeBSD or OpenBSD and desktop applications should run under Linux.  You can charge more and generate more profit because the total net cost for your customers will be lower.  Split the difference and give them a more reliable, more secure, and lower cost environment and make more money doing so.

Posted at 08:07:54 UTC

Category: FreeBSD, HowTo, Linux, Security, technology

Successful connect to WPA2 with Linux Mint 17

Saturday, September 26, 2015 

I found myself having odd problems connecting to WPA2 encrypted wireless networks with a new laptop.  There must be more elegant solutions to this problem, but this worked for me.  The problem was that I couldn’t connect to a nearby hotspot secured with WPA2 whether I used the default config tool for Mint, Wicd Network Manager, or the command line.  Errors were either “bad password” or the more detailed errors below.

As with any system, mileage may vary; my errors looked like:

wlan0: CTRL-EVENT-SCAN-STARTED 
wlan0: SME: Trying to authenticate with 68:72:51:00:26:26 (SSID='WA-bullet' freq=2462 MHz)
wlan0: Trying to associate with 68:72:51:00:26:26 (SSID='WA-bullet' freq=2462 MHz)
wlan0: Associated with 68:72:51:00:26:26
wlan0: CTRL-EVENT-DISCONNECTED bssid=68:72:51:00:26:26 reason=3 locally_generated=1

and my system config is reported as:

# lspci -vv |grep -i wireless
3e:00.0 Network controller: Intel Corporation Wireless 7260 (rev 6b)
 Subsystem: Intel Corporation Dual Band Wireless-AC 7260
# uname -a
Linux dgzb 3.16.0-38-generic #52~14.04.1-Ubuntu SMP Fri May 8 09:43:57 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

I found useful commands for manually setting up a wpa_supplicant.conf file here, and for disabling 802.11n here. The combination was needed to get things working.

The following successfully connects to a WPA2-secured network:

$ sudo su
$ iw dev
 ... Interface [interfacename] (typically wlan0, assumed below)
$ iw wlan0 scan
 ... SSID: [ssid]
 ... RSN: (if present means the network is secured with WPA2)
$ wpa_passphrase [ssid] >> /etc/wpa_supplicant.conf 
...type in the passphrase for network [ssid] and hit enter...
$ sh -c 'modprobe -r iwlwifi && modprobe iwlwifi 11n_disable=1'
$ wpa_supplicant -i wlan0 -c /etc/wpa_supplicant.conf

(should show CTRL-EVENT-CONNECTED)
(open a new terminal leaving the connection open, ending the command disconnects)

$ sudo su
$ dhclient wlan0

(should be connected now)
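For reference, the block that wpa_passphrase appends to /etc/wpa_supplicant.conf looks roughly like this (the ssid is the one from the example above; the psk value is a placeholder for the real 64-character hex key).  The commented #psk line holds your passphrase in the clear, so you may want to delete it:

network={
        ssid="WA-bullet"
        #psk="your-passphrase-here"
        psk=<64-character hex key derived from the passphrase>
}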

Posted at 10:16:28 UTC

Category: HowTo, Linux, technology

Disk Checks for Large Arrays

Friday, August 21, 2015 

If you have a large array of disks attached to your server (which is obviously going to be running FreeBSD or OpenBSD if you care about security, stability, and scalability), there are some tricks for dealing with large numbers of disks, like having 227 4TB disks attached to a single host.

Using Bash (yes there are security issues, but it is powerful)

# for i in `seq 0 227`; do smartctl -t short /dev/da$i; sleep 15; done

executes a short SMART test on all disks (thanks, Jared, for this one). Smartctl seems to max out at 32 concurrent tests, so sleep 15 ensures the 3 minute tests are finishing before new ones are executed. If you’re in a hurry, sleep 5 should do the trick and ensure all of them execute.

To get results, try something like:

# for i in `seq 0 227`; do echo "/dev/da$i"; smartctl -a /dev/da$i; sleep .5; done
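If you just want a pass/fail summary rather than the full attribute dump, a variation like this (using smartctl’s -H health check) keeps the output readable:

# for i in `seq 0 227`; do echo -n "da$i: "; smartctl -H /dev/da$i | grep -i result; done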

Bulk Fixes

Problem with the disks – need to clear existing formatting?

unmount each disk

# for i in `seq 0 227`; do umount -f /dev/da$i; done

unlock (if needed)

# sysctl kern.geom.debugflags=0x10

Overwrite the start of each disk

# for i in `seq 0 227`; do dd if=/dev/zero of=/dev/da$i bs=1k count=100; done

Overwrite the end of each disk

# for i in `seq 0 227`; do dd if=/dev/zero of=/dev/da$i bs=1m oseek=`diskinfo da$i | awk '{print int($3 / (1024*1024)) - 4;}'`; done

Recreate GPT (for ZFS)

# for i in `seq 0 227`; do gpart create -s gpt /dev/da$i; sleep .5; done

Destroy multipaths

# for i in `seq 1 114`; do gmultipath destroy disk$i; done

Disable multipath completely

# for i in `seq 1 114`; do gmultipath destroy disk$i; done
# gmultipath unload
# mv /boot/kernel-debug/geom_multipath.ko /boot/kernel-debug/geom_multipath.ko.bad
# mv /boot/kernel/geom_multipath.ko /boot/kernel/geom_multipath.ko.bad

Posted at 12:52:56 UTC

Category: FreeBSD, HowTo, technology

A Solution for Mosh Scrollback

Wednesday, July 22, 2015 

Mosh is a pretty good tool, almost indispensable when working in places with crappy internet. While it is designed to help with situations like “LTE on the beach,” it actually works very well in places where internet connectivity is genuinely bad: 1,500 msec round-trip latency, 30% packet loss, and frequent drops in connectivity that last seconds to hours, otherwise known as most of the world. On a good day I lose an SSH connection randomly about every 3-6 hours, but I’ve only ever lost a Mosh session when my system went down.

It does a lot of things, but two are key for my use: it syncs user input in the background while locally echoing what you type, so you can finish your command (and correct a typo) without waiting 1,500 msec for the remote echo to update; and it creates persistent connections that survive drops of almost any type except killing the terminal application on one end or the other (anything in between can die and when it recovers, you catch up). This means compiles finish and you actually get the output warnings…

…well…

…some of them. Because Mosh’s one giant, glaring, painful, almost debilitating weakness is that it doesn’t support scrollback. So compared to tmux or something else that you can reconnect to after your SSH session drops, you really lose screen content, which is a PITA when ls-ing a directory. I mean, it isn’t that much of an efficiency gain to have to type “ls | less” instead of just “ls” every time you want to see a directory.

I found a solution that works for me. I also use Tmux with Mosh because Tmux will survive a dead client, and when working with Windows clients, reboots are a fact of life (I know, sad, but there are some tools I still need on Windows, hopefully not for much longer).

Tmux has a facility for creating a local log file, which I then “tail -f” in a separate SSH window. If the SSH client disconnects, no loss: I can pick up the log anytime. It just mirrors everything the Mosh terminal is doing, and scrollback via the scroll bar works fine. And it is a raw text file, so you can pipe the output through grep to limit what’s displayed to something of interest (an example follows below) and review the log asynchronously as, say, a build is progressing.

Although there are some nice advantages to this, when/if Mosh supports scrollback, it’ll be far more convenient having it in the same window, but for now this is the easiest solution I could come up with.

FreeBSD:

# portmaster sysutils/tmux
# portmaster net/mosh
# ee ~/.tmux.conf
-> bind-key H pipe-pane -o "exec cat >>$HOME/'#W-tmux.log'" \; display-message 'Logging enabled to $HOME/#W-tmux.log'
-> set -g history-limit 30000
Start a Mosh session (for example with Mobaxterm on windows)
# tmux
# [CTRL]-b H
start SSH session (Mobaxterm or Putty on windows)
# tail -f csh-tmux.log
("csh" will be the name of the mosh window - so really "(MoshWindowName)-tmux.log"

You can tmux the SSH session too and still have scrollback, and then just reconnect into the same tail command, which preserves the whole scrollback. If you’re on a connection like mine, the SSH session tailing the log will drop off a couple of times a day, but you won’t lose your Mosh session, and the log will be waiting for you when you’re reminded that you need to see those security warnings from the compile that just scrolled off the Mosh screen forever.

Posted at 00:57:12 UTC

Category: FreeBSD, HowTo, Linux, technology

Making Chrome Less Horrible

Saturday, June 13, 2015 

Google’s Chrome is  a useful tool to have around, but the security features have gotten out of hand and make it increasingly useless for real work without actually improving security.

After a brief rant about SSL, there’s a quick solution at the bottom of this post.


 

Chrome’s Idiotic SSL Handling Model

I don’t like Chrome nearly as much as Firefox, but it does do some things better (I have a persistent annoyance with pfSense certificates that cause slow loading of the pfSense management page in FF, for example). Lately I’ve found that the Google+ script seems to kill Firefox, so I use Chrome for logged-in Google activities.

But Chrome’s handling of certificates is abhorrent.  I’ve never seen anything so resolutely destructive to security and utility.  It is the most ill-considered, poorly implemented, counter-productive failure in UI design and security policy I’ve ever encountered.  It is hateful and obscene.  A disaster.  An abomination. The ill-conceived excrement of ignorant twits.  I’d be happy to share my unrestrained feelings privately.

It is a private network, you idiots

I’ve discussed the problem before, but the basic issues are that:

  • The certificate authority is NOT INVALID, Chrome just doesn’t recognize it because it is self-signed.  There is a difference, dimwits.
  • This is a private network (10.x.x.x or 192.168.x.x) and if you pulled your head out for a second and thought about it, white-listing private networks is obvious.  Why on earth would anyone pay the cert mafia for a private cert?  Every web-interfaced appliance in existence automatically generates a self-signed cert, and Chrome flags every one of them as a security risk INCORRECTLY.
  • A “valid” certificate merely means that one of the zillions of cert mafia organizations ripping people off by pretending to offer security has “verified” the “ownership” of a site before taking their money and issuing a certificate that placates browsers
  • Or a compromised certificate is being used.
  • Or a law enforcement certificate is being used.
  • Or the site has been hacked by criminals or some country’s law enforcement.
  • etc.

A “valid” certificate doesn’t mean absolutely nothing, but it’s close.

So one might think it is harmless security theater, like a TSA checkpoint: it does no real harm and may have some deterrent value.  It is a necessary fiction to ensure people feel safe doing commerce on the internet.  If a few percent of people are reassured by firm warnings and are thus seduced into consummating their shopping carts, improving ad traffic quality and thus ensuring Google’s ad revenue continues to flow, ensuring their servers continue sucking up our data, what’s the harm?

The harm is that it makes it hard to secure a website.  SSL does two things: it pretends to verify that the website you connect to is the one you intended to connect to (but it does not do this) and it does actually serve to encrypt data between the browser and the server, making eavesdropping very difficult.  The latter useful function does not require verifying who owns the server, which can only be done with a web of trust model like perspectives or with centralized, authoritarian certificate management.

How to fix Chrome:

The damage is done. Millions of websites that could be encrypted are not because idiots writing browsers have made it very difficult for users to override inane, inaccurate, misleading browser warnings.  However, if you’re reading this, you can reduce the headache with a simple step (Thanks!):

Right click on the shortcut you use to launch Chrome and modify the launch command by adding the following: "--ignore-certificate-errors"

Unfuck chrome a bit.
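The target field of the shortcut ends up looking something like this (the path shown is just the common default install location and may differ on your system):

"C:\Program Files (x86)\Google\Chrome\Application\chrome.exe" --ignore-certificate-errors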

Once you’ve done this, chrome will open with a warning:

zomg: ignore certificate errors?  who doesn't anyway?

YAY.  Suffer my ass.

Java?  What happened to Java?

Bonus rant

Java sucks so bad.  It is the second worst abomination loosed on the internet, yet lots of systems use it for useful features, or try to.  There are endless compatibility problems with JVM versions, and there’s the absolutely idiotic horror of the recent security requirement that disables the “medium” security setting completely no matter how hard you want to override it, which means you can’t ever update past JVM 7.  Ever.  Because JVM 8 is utterly useless: they broke it completely thinking they’d protect you from man-in-the-middle attacks on your own LAN.

However, even if you have frozen with the last moderately usable version of Java, you’ll find that since Chrome 42 (yeah, the 42nd major release of Chrome. That numbering scheme is another frustratingly stupid move, but anyway, get off my lawn) Java just doesn’t run in Chrome.  WTF?

Turns out Google, happy enough to push their own crappy products like Google+, won’t support Oracle’s crappy product any more.  As of 42 Java is disabled by default.  Apparently, after 45 it won’t ever work again.  I’d be happy to see Java die, but I have a lot of infrastructure that requires Java for KVM connections, camera management, and other equipment that foolishly embraced that horrible standard.  Anyhow, you can fix it until 45 comes along…

To enable Java in Chrome for a little while longer, you can follow these instructions to enable NPAPI (which enables Java).  Type “chrome://flags/#enable-npapi” in the browser bar and click “enable.”

Enable NPAPI

Posted at 13:24:37 UTC

Category: HowTo, Security, technology

Speaker Build

Friday, November 28, 2014 

In December of 2002 (really, 2002, 12 years ago), I decided that the crappy former Sony self-amplified speakers with blown amplifiers that I had wired into my stereo as surround speakers really didn’t sound very good as they were, by then, 7 years old and the holes in the plastic housing where the adjustment knobs once protruded were covered by aging gaffers tape.

At least it was stylish black tape.

I saw on ebay a set of “Boston Acoustics” woofers and tweeters back in the time when ebay prices could be surprisingly good.  Boston Acoustics was a well-respected company at the time making fairly decent speakers.  36 woofers and 24 tweeters for $131 including shipping.  About 100 lbs of drivers.  And thus began the execution of a fun little project.


 

Design Phase: 2003-2011

I didn’t have enough data to design speaker enclosures around them, but about a year later (in 2003), I found this site, which had a process for calculating standard speaker properties with instruments I have (frequency generator, oscilloscope, etc.)  I used the weighted diaphragm method.

WOOFER: PN 304-1150001-00 22 JUL 2000
80MM CONE DIA = 8CM
FS  = 58HZ
RE  = 3.04 OHMS
QMS = 1.629
QES = 0.26
QTS = 0.224
CMS = 0.001222
VAS = 4.322 (LITERS) 264 CUBIC INCHES
EBP = 177.8

NOMINAL COIL RESISTANCE @ 385HZ (MID LINEAR BAND) 3.19 OHMS
NOMINAL COIL INDUCTANCE (@ 1KHZ) 0.448 MHENRY

TWEETER: PN 304-050001-00 16 OCT 2000
35MM CONE DIA
FS  = 269HZ
RE  = 3.29 OHMS
QMS = 5.66
QES = 1.838
QTS = 1.387
CMS = 0.0006
VAS = 0.0778 (LITERS)
EBP = 86.7

NOMINAL COIL RESISTANCE @ 930HZ (MID LINEAR BAND) 3.471 OHMS
NOMINAL COIL INDUCTANCE (@ 1KHZ) 0.153 MHENRY
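As a sanity check on measurements like these, the total Q should follow from the mechanical and electrical Qs by the standard relation Qts = (Qms × Qes) / (Qms + Qes). For the woofer that gives (1.629 × 0.26) / (1.629 + 0.26) ≈ 0.224, and for the tweeter (5.66 × 1.838) / (5.66 + 1.838) ≈ 1.387, both matching the measured QTS values above.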

Awesome.  I could specify a cross over and begin designing a cabinet.  A few years went by…

In January of 2009 I found a good crossover at AllElectronics.  It was a half decent match and since it was designed for 8 ohm woofers, I could put two of my 4 ohm drivers in series and get to about the right impedance for better power handling (less risk of clipping at higher volumes and lower distortion as the driver travel is cut in half, split between the two).

HTTP://WWW.ALLELECTRONICS.COM/MAKE-A-STORE/ITEM/XVR-21/2-WAY-CROSSOVER-INFINITY/1.HTML
CROSS OVER FREQUENCY 
3800HZ CROSSOVER
LOW-PASS: 18DB, 8 OHM
HIGH-PASS: 18DB, 4 OHM

Eventually I got around to calculating the enclosure parameters.  I’m not sure when I did that, but sometime between 2009 and 2011.  I found a site with a nice script for calculating a vented enclosure with dual woofers, just like I wanted and got the following parameters:

TARGET VOLUME 1.78 LITERS = 108 CUBIC INCHES

DRIVER VOLUME (80MM) = 26.25 CUBIC INCHES = 0.43 LITERS
CROSS OVER VOLUME = 2.93 CUBIC INCHES = 0.05 LITERS
SUM = 0.91 LITERS
1" PVC PORT TUBE: OD = 2.68CM, ID = 2.1CM = 3.46 CM^2
PORT LENGTH = 10.48CM = 4.126"


WIDTH = 12.613 = 4.829"
HEIGHT = 20.408 = 7.82"
DEPTH = 7.795 = 3"

In 2011 I got around to designing the enclosure in CAD:

There was no way to fit the crossover inside the enclosure, as the drivers have massive, magnetically shielded magnet structures, so the crossovers got mounted on the outside.  The drivers were designed for inside mounting (as opposed to flange mounting), so I opted to radius the opening to provide some horn-loading.

I also, over the course of the project, bought some necessary tools to be prepared for eventually doing the work: a nice Hitachi plunge router and a set of cheap router bits to form the radii and hole saws of the right size for the drivers and PVC port tubes.

Build Phase (2014)

This fall, Oct 9 2014, everything was ready and the time was right.  The drivers had aged just the appropriate 14 years since manufacture and were in the peak of their flavor.

I started by cutting down some PVC tubes to make the speaker ports and converting some PVC caps into the tweeter enclosure.  My first experiment with recycled shelf wood for the tweeter mounting plate failed: the walls got a bit thin and it was clear that decent plywood would make life easier.  I used the shelf wood for the rest of the speaker: it was salvaged from my building, which was built in the 1930s and is probably almost 100 years old.  The plywood came with the building as well, but was from the woodworker who owned it before me.

I got to use my router after so many years of contemplation to shape the faceplates, fabricated from some fairly nice A-grade plywood I had lying around.

Once I got the boxes glued up, I installed the wiring and soldered the drivers in.  The wood parts were glued together with waterproof glue while the tweeters and plastic parts were installed with two component clear epoxy.  The low frequency drivers had screw mounting holes, so I used those in case I have to replace them, you know, from cranking the tunage.

I lightly sanded the wood to preserve the salvage wood character (actually, I had no power sander, and after 12 years I wasn’t going to sand my way to clean wood by hand), then treated it with some polyurethane I found left behind by the woodworker who owned the building before I did.  So that was at least 18 years old.  At least.

I supported the speakers over the edge of the table to align the drivers in the holes from below.

The finished assembly looked more or less like I predicted:

Testing

The speakers sound subjectively quite nice, but I was curious about the frequency response.  To test them I used the pink noise generator in Audacity to generate 5.1 (6 channel) pink noise files, which I copied over to the HTPC to play back through my amp.  This introduces the amp’s frequency response, which is unlikely to be particularly good, and room characteristics, which are certainly not anechoic.

Then I recorded the results per speaker on a 24/96 Tascam DR-2d recorder, which also introduces some frequency response issues, and imported the audio files back into Audacity (and the original pink noise file), plotted the spectrum with 65536 poles, and exported the text files into excel for analysis.

Audacity’s pink noise looks like this:

Pink_Noise_Spectrum

It’s pretty good – a bit off plan below 10 Hz and the random noise gets a bit wider as the frequency increases, but it is pretty much what it should be.

First, I tested one of my vintage ADS L980 studio monitors.  I bought my L980s in high school in about 1984 and have used them ever since.  In college I blew a few drivers (you know, cranking tunage) but they were all replaced with OEM drivers at the Tweeter store (New England memories).  They haven’t been used very hard since, but the testing process uncovered damage to one of my tweeters, which I fixed before proceeding.

ADS L980 Spectrum

The ADS L980 has very solid response in the low frequency end with a nicely manufactured 12″ woofer and good high end with their fancy woven tweeter.  A 3 way speaker, there are inevitably some complexities to the frequency response.

I also tested my Klipsch KSC-C1 Center Channel speaker (purchased in 2002 on ebay for $44.10) to see what that looked like:

It isn’t too bad, but clearly weaker in the low frequency, despite moderate sized dual woofers, and with a bit of a spike in the high frequency that maybe is designed in for TV or is perhaps just an artifact of the horn loaded tweeter. It is a two way design and so has a fairly smooth frequency response in the mid-range, which is good for the voice program that a center speaker mostly carries.

And how about those new ones?

New Speaker Spectrum

Well… not great, a little more variability than one would hope, and (of course) weak below about 100Hz.  I’m a little surprised the tweeters aren’t a little stronger over about 15kHz, though while that might have stood out to me in 1984, it doesn’t now.  Overall the response is quite good for relatively inexpensive drivers, the low frequency response, in particular, is far better than I expected given the small drivers.  The high frequency is a bit spiky, but quite acceptable sounding.

And they sound far, far better than the poor hacked apart Sony speakers they replaced.

Raw Data

The drawings I fabricated from and the raw data from my tests are in the files linked below:

Speaker Design Files (pdf)

Pink Noise Tests (xlsx)

Posted at 21:05:03 UTC

Category: Audio, Fabrication, HowTo, photo, technology

Copying Text Without the Horrible Formatting

Saturday, August 16, 2014 

Have you ever copied some text off a web page or a document and then gone to paste it in another document or spreadsheet only to find some horribly formatted hypertext pasted in for some bizarre reason, then had to go through the hassle of trying to figure out how to remove the formatting?

Have you ever used Putty or another SSH client that automatically copies highlighted text to the copy buffer and allows pasting with a middle click and wished all programs were this smart?

Has anyone, ever, in the history of using a computer, WANTED to paste formatted text from a web page or drop some idiotic OLE object into their FrameMaker document?  I know I’ve never once wanted that to happen.

Tonight I had to copy 100 or so MAC addresses out of a DHCP list from the web interface of pfSense into an Excel table, and each damn time I got stupid formatting and then had to select the cell, open the drop-down menu for paste options, select paste as text, and repeat.  Holy crap, what the hell were they thinking?  No clue.

None of the paste solutions recommended for Excel worked for me and OpenOffice/Libre were just as screwed up.  But I found some solutions for the copy side for Windows.  Some of the plugins should work on Linux.  If you’re using a Mac, The Steve has already decided how your work is permitted to look and the Apple goons will probably break your fingers if you try to modify formatting.

  • Auto Copy makes Chrome on windowz almost as efficient as a linux application! Copy as text, select to copy. Middle click to paste.  Dang. But it doesn’t seem to always remove formatting (select to copy works reliably though).
  • Copy as Plain text fixes this stupidity on Firefox.
  • UPDATE: Márton Anka is an awesome developer who writes some of the best code on the internet and his plugin PLAINCOPY, is an excellent solution.
  • Autocopy2 adds the incredibly useful select to copy to Firefox.  Once you get used to it, you’ll be frustrated with applications that don’t support it.
  • This edit to maker.ini will prefer pasting plain text (or now UTF8) over OLE2, eliminating that horror from FrameMaker.

It turns out there’s a universal solution for Windows.

  • PureText removes formatting from text on the clipboard and pastes it with an alternate key command (like Windows-V), so even copying from word documents to excel isn’t a horrible nightmare of tedium.

I haven’t yet figured out how to copy images from Firefox to Thunderbird without pasting them as a reference to the original image.  Pasting an HTML reference to remote content means the recipient either doesn’t see the image (because they don’t auto-load remote content, or because they don’t have permission to load it, or aren’t on-line when they read their mail) or Thunderbird makes a request to the referenced site to load the media, creating a privacy-violating log entry.  The most convenient solution I’ve found is to paste the image into IrfanView first and then copy from there into Thunderbird.

Posted at 15:40:40 UTC

Category: HowTo, technology