HowTo

CoreELEC/Kodi on a Cheap S905x Box

Friday, September 11, 2020 

I got a cheap Android TV box off Amazon a while back; after consolidating gear, it ended up being spare.  One of the problems with these TV boxes is that they don’t get OS updates after release, and soon video apps stop working because of annoying DRM changes.  I figured I’d try switching one to Linux and see how it went.  There are some complications: these are ARM devices (not x86/x64), and getting ADB to connect is slightly complicated because it isn’t a USB device.  There are a few variations of the “ELEC” (Embedded Linux Entertainment Center) family: OpenELEC, LibreELEC, and CoreELEC (at least).  CoreELEC focuses on ARM boxes and makes sense for the cheapest hardware.

What you get is a little Linux-based device that boots to Kodi, runs an SMB server, and provides SSH access: basically an open source SmartTV that doesn’t spy on you.  It is well suited to a media library you own rather than rent, such as your DVD/Blu-Ray collection.  Perhaps physical media will come back into vogue as the rent-to-watch streaming world fractures into a million individual subscriptions.  Once moved to the FOSS world, there are regular updates and, generally, people committed to keeping things working: far better and far longer-term support than OEMs offer for random cheap Android devices (or even quite expensive ones).

A: Download the CoreELEC image:

You need to know the processor and RAM of your device – note that a lot of different brands are the same physical boxes.  My “CoCoq M9C Pro” (not listed) is also sold as a “Bqeel M9C Pro” (listed) and the SOC (S905X) and RAM (1G) match, so the Bqeel image worked fine.  The download helper at the bottom of the https://coreelec.org/ site got me the right file easily.

B: Image a uSD or USB device for your wee box:

I’m running Linux as my desktop so I used the LibreELEC Creator Tool and just selected the file downloaded previously.  That part went easily.  The only issue was that after imaging, Linux didn’t recognize the device and I had to systemctl --user restart gvfs-udisks2-volume-monitor before I could do the next step.

C: One manual file move for bootability:

ELEC systems use a “device tree”: a file with device-specific hardware descriptors that gets (most of) the random cheap peripherals working on these cheap boxes.  You have to find the right one in the /Device Trees folder on your newly formatted uSD/USB device, copy it to the root directory, and rename it dtb.img.
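For example, something like the following (a sketch: the mount point and the correct .dtb name vary by system and by your box’s SOC/RAM combination; gxl_p212_1g.dtb is a common choice for 1GB S905X boxes, but verify yours):

$ cd /media/$USER/COREELEC
$ cp "Device Trees/gxl_p212_1g.dtb" ./dtb.img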

D: Awesome, now what?

Once you have your configured, formatted, bootable uSD/USB, you have to get your box to boot off it.  This is a bit harder without the power/volume control buttons normally found on Android devices.  I already had ADB set up for stripping bloatware from phones.  You just need one command to get this to work, reboot update, but getting it to your box is a bit of a trick.

Most of these boxes are supplied rooted, so you don’t need to do that, but you do need a command line tool and to enable ADB.

To enable ADB: navigate to the build number (usually Settings > System > About phone > Build number) and click it a bunch of times until it says “you’re a developer!”  Then go to developer options and enable ADB.  You can’t use it yet, though, because there’s no USB connection to the box.

Install a Terminal Emulator for Android and enable ADB over TCP

su
setprop service.adb.tcp.port 5555
stop adbd
start adbd

Then check your IP with ifconfig and on your desktop computer run

adb connect dev.ip.add.ress:5555

(dev.ip.add.ress is the IP address of your device – like 192.168.100.46, but yours will be different)

Then execute adb reboot update from the desktop computer and the box should reboot then come up in CoreELEC.

Going back to Android is as easy as disconnecting the boot media and rebooting; the Android OS is still on the device.

Issue: WIFI support.

Many of these cheap devices use the cheapest WIFI chips they can, and often the manufacturers won’t share details with FOSS developers, so driver support is weak.  Bummer, but the boxes have wired connections, and wired works infinitely better anyway: it isn’t a phone, it’s attached to a TV that’s not moving around, so run the damn wire and get a stable, reliable connection.  If that’s not possible, check the WIFI chip before buying or get a decent USB WIFI adapter that is supported.

 

Posted at 16:46:48 GMT-0700

Category: HowTo, Linux, technology

WebP and SVG

Tuesday, September 1, 2020 

Using WebP coded images inside SVG containers works.  I haven’t found any automatic way to do it, but it is easy enough manually and results in very efficiently coded images that work well on the internets.  The manual process is to Base64 encode the WebP image and then open the .svg file in a text editor and replace the

xlink:href="data:image/png;base64, ..."

with

xlink:href="data:image/webp;base64,..."

(“…” means the appropriate data, obviously).


Back in about 2010 Google released the spec for WebP, an image compression format that provides a roughly 2-4x coding efficiency gain over the venerable JPEG (vintage 1974), derived from the VP8 CODEC they bought from ON2. VP8 is a contemporary of, and technical equivalent to, H.264 and was developed during a rush of innovation to replace the aging MPEG-II standard that also produced Theora and Dirac. Both the VP8 and H.264 CODECs are encumbered by patents, but Google granted an irrevocable license to all of its patents, making VP8 “open,” while H.264’s patents compel licensing from MPEG-LA.  One would think this would tend to make VP8 (and the WEBM container) a global standard, but Apple refused to give Google the win and there’s still no native support in Apple products.

A small aside on video and still coding techniques.

All modern “lossy” CODECs (“lossy” meaning some data is thrown away, like .mp3, as opposed to “lossless,” meaning the original can be reconstructed exactly, as in .flac) are founded on either Discrete Cosine Transform (DCT) or Wavelet (DWT) encoding of “blocks” of image data.  There are far more detailed write-ups online that explain the process in depth, but the basic concept is to divide an image into small tiles of data, apply a mathematical function that converts that data into a form which sorts the information from least human-perceptible to most human-perceptible, and then set some threshold for throwing away the least important data while keeping the bits that matter most to human perception.
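For the curious, the workhorse here is the 2D DCT-II, which JPEG applies to 8×8 tiles (WebP’s VP8 heritage uses 4×4 integer approximations of the same transform family).  For an N×N tile of pixels p, normalization constants omitted, it is:

X_{u,v} = \sum_{x=0}^{N-1}\sum_{y=0}^{N-1} p_{x,y}\cos\left[\frac{\pi}{N}\left(x+\tfrac{1}{2}\right)u\right]\cos\left[\frac{\pi}{N}\left(y+\tfrac{1}{2}\right)v\right]

Low (u,v) coefficients capture the smooth gradients eyes notice; high (u,v) coefficients capture fine detail that quantization can discard first.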

Wavelets are promising, but never really took off, as in JPEG2000 and Dirac (which was developed by the BBC).  It is a fairly safe bet that any video or still image you see is DCT coded thanks to Nasir Ahmed, T. Natarajan and K. R. Rao.  The differences between 1993’s MPEG-1 and 2013’s H.265 are largely around how the data that is perceptually important is encoded in each still (intra-frame coding) and some very important innovations in inter-frame coding that aren’t relevant to still images.

It is the application of these clever intra-frame perceptual data compression techniques that is most relevant to the coding efficiency difference between JPEG and WebP.

Back to the good stuff…

Way back in 2010 Google experimented with the VP8 intra-coding techniques to develop WebP, a still image CODEC that had to have two core features:

  • better coding efficiency than JPEG,
  • ability to handle transparency like .png or .tiff.

This could be the one standard image coding technique to rule them all – from icons to gigapixel images, all the necessary features and many times better coding efficiency than the rest.  Who wouldn’t love that?

Apple.

Of course it was Apple.  Can’t let Google have the win.  But finally, with Safari 14 (June 22, 2020: a decade late!), iOS users can see WebP images and websites don’t need crazy auto-detect 1974-tech substitution tricks.  Good job, Apple!

It may not be a coincidence that Apple has just released their own still image format, .heif, based on the intra-frame coding magic of H.265.  Maybe they thought it might be a good idea to suddenly pretend to be an open player rather than a walled-garden-screw-you, lest iOS insta-users wall themselves off from the 90% of the world that isn’t willing to pay double to pose with a fashionable icon in their hands.  Not surprisingly, .heic, based on H.265 developments, is meaningfully more efficient than WebP, which is based on VP8/H.264-era techniques; but as it took WebP 10 years to become a usable web standard, I wouldn’t count on .heic having universal support soon.  I expect Google is hard at work on a VP9-based version of WebP right now.  Technology is fast, marketing is slow.  Oh well.  In the meantime….

SVG support in browsers is a much older thing – Apple embraced it early (SVG was not developed by Google so….) and basically everything but IE has full support (IE…  the tool you use to download a real browser).  So if we have SVG and WebP, why not both together?

Oddly I can’t find support for this in any of the tools I use, but as noted at the open, it is pretty easy.  The workflow I use is to:

  • generate a graphic in GIMP or Photoshop or whatever and save as .png or .jpg as appropriate to the image content with little compression (high image quality)
  • Combine that with graphics in Inkscape.
  • If the graphics include type, convert the type to SVG paths to avoid font availability problems or having to download a font file before rendering the text or having it render randomly.
  • Save the file (as .svg, the native format of Inkscape)
  • Convert the image file to WebP with a reasonable tool like Nomacs or Irfanview.
  • Base64 encode the image file, either on the command line with base64 infile.webp > outfile.webp.b64 or with one of the many convenient online encoders.
  • If you use the command line option, the prefix for the data is “data:image/webp;base64,” as shown in the sketch after this list.
  • Replace the … in the appropriate xlink:href="...." with the new data using a text editor like Atom.
  • Drop the file on a browser page to see if it works.
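For reference, a minimal command line version of the convert-and-encode steps might look like this (cwebp is part of Google’s libwebp tools; the file names are placeholders):

$ cwebp -q 80 image.png -o image.webp          # convert the raster export to WebP
$ printf 'data:image/webp;base64,' > image.b64 # start the data URI
$ base64 -w0 image.webp >> image.b64           # append the encoded image as one line

Then paste the contents of image.b64 into the xlink:href="…" attribute of the <image> element in the .svg.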

WordPress blocks .svg uploads without a plugin, so you need one.

The picture is 101.9kB and tolerates zoom quite well. (give it a try, click and zoom on the image).

 

Posted at 08:54:16 GMT-0700

Category: HowTo, Linux, photo, self-publishing, technology

Dealing with Apple Branded HEIF .HEIC files on Linux

Saturday, August 22, 2020 

Some of the coding tricks in H.265 have been incorporated into MPEG-H coding, an ISO standard introduced in 2017, which yields a roughly 2:1 coding efficiency gain over the venerable JPEG, which was introduced in 1992.  Remember that?  I do; I’m old.  I remember having a hardware NuBus JPEG decoder card.   One of the reasons JPEG has lasted so long is that images have become a small storage burden (compared to 4K video, say) and that changing format standards is extremely annoying to everyone.

Apple has elected to make every rational person’s life difficult, put a little barbed wire around their high-fashion walled garden, and do something a little special with their brand of HEVC (H.265) image profile.  Now, normally, seeing iOS users’ insta images of how fashionable they are isn’t really worth the effort, but now and then a useful correspondent joins the cult, forks over a ton of money to show off a logo, and starts sending you stuff in their special proprietary format.  Annoying, but fixable.

Assuming you’re using an OS that is neither primarily spyware nor fashion forward, such as Linux Mint, you can install HEIF decode (including Apple Brand HEIC) with a few simple commands:

$ sudo add-apt-repository ppa:jakar/qt-heif
$ sudo apt update
$ sudo apt install qt-heif-image-plugin

Once installed, various image viewers should be able to decode the images.  I rather like nomacs as a fairly tolerable replacement for Irfan Skiljan’s still awesome IrfanView.
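If you need to convert rather than just view them, libheif’s example tools include a converter (an extra step beyond the viewer setup above; the package naming here assumes Debian/Ubuntu derivatives):

$ sudo apt install libheif-examples
$ heif-convert IMG_1234.heic IMG_1234.jpg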

Posted at 03:56:36 GMT-0700

Category: HowTo, Linux, photo, Positive, reviews, technology

Integrate Fail2Ban with pfSense

Monday, July 13, 2020 

Fail2Ban is a very nice little log monitoring tool that is used to detect cracking attempts on servers, extract the malicious IPs, and do things to them: usually temporarily adding the IP address of the source of badness to the server’s firewall “drop” list so that its bad packets are lost in the aether.   This is great, but instead of running a firewall on every server, each locally detecting and blocking malicious actors, it would be even better to detect across all services and servers on the LAN and push the results up to a central firewall so the bad IPs can’t reach the network at all.

I like pfSense as a firewall and run FreeBSD on my servers; I couldn’t find a prebuilt tool to integrate F2B with pfSense, but it wasn’t hard to hack something together so it worked. Basically I have F2B maintain a local “block list” of bad IPs as a simple text file which is published via Apache, from where pfSense grabs it and applies it as a LAN-wide IP filter.  I use the pfSense package pfBlockerNG to set up the tables, but in the end a custom script running on the pfSense server actually grabs the file and updates the pfSense block lists from it on a 1 minute cron job.

There are plenty of well-written guides for getting F2B working and for configuring its jails.

The custom bits I did to get it to work are:

Custom F2B Action

On the protected side, I modified the “dummy.conf” action to maintain a list of bad IPs in an Apache-served location that pfSense could reach.  F2B manages that list, putting bad IPs in “jail” and letting them out as in any normal F2B installation – but instead of the local server’s packet filter, the jail is a web-published text list.

# Fail2Ban configuration file
#
# Author: David Gessel
# Based on: dummy.conf by Cyril Jaquier
#

[Definition]

# Option:  actionstart
# Notes.:  command executed on demand at the first ban (or at the start of Fail2Ban if actionstart_on_demand is set to false).
# Values:  CMD
#


actionstart = if [ -z '<target>' ]; then 
                  touch <target>
                  printf %%b "# <init>\n" <to_target>
                  fi 
              chmod 755 <target>
              echo "%(debug)s started"

# Option:  actionflush
# Notes.:  command executed once to flush (clear) all IPS, by shutdown (resp. by stop of the jail or this action)
# Values:  CMD
#

actionflush = if [ ! -z '<target>' ]; then
                  rm -f <target>
                  touch <target>
                  printf %%b "# <init>\n" <to_target>
                  fi
              chmod 755 <target>
              echo "%(debug)s clear all"

# Option:  actionstop
# Notes.:  command executed at the stop of jail (or at the end of Fail2Ban)
# Values:  CMD
#
actionstop = if [ ! -z '<target>' ]; then
                  rm -f <target>
                  touch <target>
                  printf %%b "# <init>\n" <to_target>
                  fi
             chmod 755 <target>
             echo "%(debug)s stopped"

# Option:  actioncheck
# Notes.:  command executed once before each actionban command
# Values:  CMD
#
actioncheck =

# Option:  actionban
# Notes.:  command executed when banning an IP. Take care that the
#          command is executed with Fail2Ban user rights.
# Tags:    See jail.conf(5) man page
# Values:  CMD
#

actionban = printf %%b "<ip>\n" <to_target>
            sed -i '' '/^$/d' <target>
            sort -u <target> -o <target>
            chmod 755 <target>
            echo "%(debug)s banned <ip> (family: <family>)"

# Option:  actionunban
# Notes.:  command executed when unbanning an IP. Take care that the
#          command is executed with Fail2Ban user rights.
# Tags:    See jail.conf(5) man page
# Values:  CMD
#

# flush the IP using grep which is supposed to be about 15x faster than sed  
# grep -v "pattern" filename > filename2; mv filename2 filename


actionunban = grep -v "<ip>" <target> > <temp>
              mv <temp> <target>
              chmod 755 <target>
              echo "%(debug)s unbanned <ip> (family: <family>)"


debug = [<name>] <actname> <target> --

[Init]

init = BRT-DNSBL

target = /usr/jails/claudel/usr/local/www/data-dist/brt/dnsbl/brtdnsbl.txt
temp = <target>.tmp
to_target = >> <target>

Once this list is working, then move to the pfSense side.

Set up pfBlockerNG

The basic setup is well described, for example in https://protectli.com/kb/how-to-setup-pfblockerng/ and it provides a lot of useful blocking options, particularly with externally maintained lists of internationally recognized bad actors.  There are two basic functions, related but different:

DNSBL

Domain Name Service Block Lists are lists of domains associated with unwanted activity and blocking them at the DNS server level (via Unbound) makes it hard for application level services to reach them.  A great use of DNSBLs is to block all of Microsoft’s telemetry sites, which makes it much harder for Microsoft to steal all your files and data (which they do by default on every “free” Windows 10 install, including actually copying your personal files to their servers without telling you!  Seriously.  That’s pretty much the definition of spyware.)

It also works for non-corporate-sponsored spyware, for example lists of command and control servers found for botnets or ransomware servers.  This can help prevent such attacks by denying trojans and viruses access to their instruction servers.  It can also easily help identify infected computers on the LAN, as any blocked requests are logged (redirected to 1.1.1.1 at the moment, which is an unfortunate choice given that 1.1.1.1 is now a well-reputed DNS server, like Google’s 8.8.8.8 but, it seems, without all the corporate spying).  There is a bit of irony in blocking lists of telemetry-gathering IPs that are built using telemetry.

Basically, DNSBLs prevent services on the LAN from reaching nasty destinations on the internet by answering any DNS request for a malicious domain name with a dead-end IP address, as sketched below.  When your Windows machine wants to report your web browsing habits to Microsoft, it instead gets a “page not found” error.
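Conceptually, what a DNSBL entry does in Unbound looks something like this (illustrative syntax only; telemetry.example.com is a placeholder and pfBlockerNG generates its own config):

local-zone: "telemetry.example.com" redirect
local-data: "telemetry.example.com A 1.1.1.1"

Any lookup of the listed domain, or of its subdomains with redirect, then gets the dead-end answer instead of the real address.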

IPBL

This concept uses an IPBL, a list of IP addresses to block.  An IPBL works at a lower level than a DNSBL and typically is set up to block traffic in both directions: a script kiddie trying to brute force a password can be blocked from reaching the services on the LAN, but the reverse direction is blocked too.  If a malicious entity trips F2B, not only are they stopped, but any sneaky services on your LAN are also blocked from reaching them on the internet.

All we need to do is get the block list F2B is maintaining into pfSense.  pfBlockerNG can subscribe to the list easily enough, but the minimum update time is an hour – which is an awfully long time to let someone try to guess passwords or flood your servers with 404 requests or whatever else you’re using F2B to detect and stop.  So I wrote a simple script that executes a few simple commands to grab the IP list F2B maintains, clean it, and use it to update the packet filter drop lists:

/root/custom/brtblock.sh

#!/usr/bin/env sh
# set -x # uncomment for "debug"

# Get latest block list
/usr/local/bin/curl -m 15 -s https://server.ip/brtdnsbl.txt > /var/db/pfblockerng/original/BRTDNSBL.orig
# filter for at least semi-valid IPs.
/usr/bin/grep  -Eo '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' /var/db/pfblockerng/original/BRTDNSBL.orig > /var/db/pfblockerng/native/BRTDNSBL.txt
# update pf tables
/sbin/pfctl -t pfB_BRTblock -T replace -f /var/db/pfblockerng/native/BRTDNSBL.txt > /dev/null 2>&1

HT to Jared Davenport for helping to debug the weird /env issues that arise when trying to call these commands directly from cron.

Preventing Self-Lockouts

One of the behaviors of pfBlockerNG that the dev seems to think is a feature is automatic filter order management.  This overrides manually sorted filter orders and puts pfB’s block filters ahead of all other filters, including, say, allow filters for your own IPs that you never want locked out in case you forget your passwords and accidentally trigger F2B on yourself.  To fix this, you have to use a non-default setting and make all IP block list “action” types “Alias_Native.”

pfBlockerNG Native IP Block Lists

Then you write your own per-alias filter (typically “drop” or “reject”) and pfBlockerNG won’t auto-order them for you on update.

pfSense Filter Order

Cron Plugin

The last ingredient is updating the list on pfSense quickly.  pfSense is designed to be pretty easy to maintain, so it overwrites most of the file structure on update, making command line modifications frustratingly transient.  I understand that /root isn’t flushed on an update, so the above script should persist inside the /root directory, but crontab -e modifications just don’t stick around.  To have cron modifications persist, install the “Cron” package with the pfSense package manager, then set up a cron job to run the script above to keep the block list updated.  “*/1” means run the script once a minute, as in the example entry below.
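For reference, the resulting schedule as entered through the Cron package looks something like this (the root user column is an assumption; adjust the path if you stored the script elsewhere):

*/1    *    *    *    *    root    /root/custom/brtblock.sh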

pfSense Cron Config

Results

The system seems to be working well enough: the list of miscreants is small but effectively targeted, with 11,840 packets dropped from an average of about 8-10 bad IPs at any given time.

pfBlockerNG current status

Posted at 05:48:43 GMT-0700

Category: FreeBSD, HowTo, Security, technology

Save your email! Avoid the Thunderbird 68 update

Thursday, November 28, 2019 

TLDR:

If you’ve customized TB with plugins you care about, DO NOT UPDATE to 68 until you verify that every plugin you use is compatible.  TB will NOT check for you, and once you launch 68, any plugins that have been updated for 68 compatibility will no longer work with 60.x.  That means you’d better have a backup of your .thunderbird profile folder or you’re going to be filled with seething rage and have to undo the update. This misery is the consequence of Mozilla having failed to fully uphold their obligation to the user and developer communities that rely on and have enhanced the tools they control.

BTW: if you’re using Firefox and miss the plugins that made it more than just a crappy clone of Chrome, Waterfox is great and actually respects users and community developers.  Give it a try.

Avoid Thunderbird 68 Hell

To avoid this problem now and in the future, you have to disable automatic updates.  In Thunderbird: Edit->Preferences->Advanced->General->[Config Editor…]->app.update.auto=false, app.update.enabled=false.

Screenshot of no update prefs thunderbird

On Linux, you should also disable OS Updates using Synaptic: select installed thunderbird 60.x and then from the menu bar Package->Lock Version.

screenshot of locking package in synaptic
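If you prefer the command line to Synaptic, apt can pin the package too (assuming a Debian/Ubuntu-based distro; reverse it later with unhold):

sudo apt-mark hold thunderbird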

If you’ve been surprise-updated to 68, a catastrophically incompatible developer vanity project and a massive middle finger to the plugin developer community (as was 60, to a lesser extent), then you have to revert.  This sucks, as 60.x isn’t in the repos.

Undo Thunderbird 68 Hell

First, do not run 68.  Ever.  Don’t.  It will cause absolute chaos with your plugins.  In my case it showed most plugins as incompatible, then updated some, then showed others as compatible but had deleted the .xpi files, so they weren’t in the .thunderbird folder any more despite being listed and incorrectly shown as compatible.  This broke some things I could live without, like Extra Format Buttons, but also others I really needed, like Dorando Keyconfig and Sieve.  Mozilla’s attitude appears to be “if you’re using software differently than we think you should, you’re doing it wrong.”

The first step, before breaking things even more, is to back up your .thunderbird directory.  You can find the location from Help->Troubleshooting Information->Application Basics->Profile Directory: just click [Open Directory].  Make a backup copy of this directory before doing anything else if you don’t already have one; in Linux, a command might be:

tar -jcvf thunderbird_profile_backup.tar.bz2 .thunderbird

If you’re running Windows, old installers of TB are available here.

In Linux, using a terminal, see what versions are available in your distro:

apt-cache show thunderbird

I see only 1:68.2.1+build1-0ubuntu0.18.04.1 and 1:52.7.0+build1-0ubuntu1. Oh well, neither is what I want. While in the terminal, uninstall Thunderbird 68:

sudo apt-get remove thunderbird

As my distro, Mint 19.2, only has 68.x and 52.x in the apt cache, I searched for a .deb file of a recent version.  I couldn’t find the last plugin-compatible version, 60.9.0, as an easy-to-install .deb (though it is available for manual install from Ubuntu), so I am running 60.8.0, which works.  One could also download the executable file of 60.9.1, put it somewhere (/opt, say), and update start scripts to execute that location.

I found the .deb file of 60.8.0 at a helpful historical repository of Mozilla installers.  Generally the GUI will auto-install on execution of the download, but don’t launch Thunderbird until you restore your pre-68 .thunderbird profile directory or it will autocreate profile files that are a huge annoyance.  If you don’t have a pre-68 profile, you will probably have to hunt down pre-68-compatible versions of all of your plugins, though I didn’t note any catastrophic profile incompatibilities (YMMV).

Good luck. Mozilla just stole a day of your life.

Read more…

Posted at 07:51:32 GMT-0700

Category: HowTo, technology

Frequency of occurrence analysis in LibreOffice

Tuesday, November 19, 2019 

One fairly common analytic technique is finding the rate at which something appears in a time-referenced file, for example a log file.

Let’s say you’re looking for the rate of some reported failure to determine, say, whether a modification or update has made things better or worse.  There are log analysis tools to do this (like Splunk), but one way to do it is with a spreadsheet.

Assuming you have a table with time in a column (say A) and some event text in another (say B) like:

11-19 16:51:03 a bad error happened
11-19 16:51:01 something minor happened
11-19 12:51:01 cat ran by

you might convert that event text to a numerical value (into, say, column C) for example by:

=IF(ISERROR(FIND("error",B1)),"",1)

FIND returns the position of the text (“error”) within B1 if it is found, and returns an error if not. ISERROR converts that to a logical value: TRUE when FIND failed (no match) and FALSE when the text was found. The IF(ISERROR()) construction then lets one specify a value for each case; a bit convoluted, but the result here is “” (blank) if “error” isn’t found in B1 and 1 if “error” is found.

Great: fill down and you have a new column C with blank or 1 depending on whether “error” was found in column B. Summing column C yields the total number of lines in which the substring “error” occurred.
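For example (the 500-row range is arbitrary; match it to your data):

=SUM(C1:C500)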

But now we might want to make a histogram with a sum count of the occurrences within a specific time period, say “errors/hour”.

Create a new column, say column D, and fill the first two rows with date/time values separated by the sampling period (say an hour), for example:

11/19/2019 17:00:00
11/19/2019 16:00:00

And fill down; there’s a quirk where LibreOffice occasionally loses one second, which looks bad. It probably won’t meaningfully change the results, but just edit the first error and continue filling down, or compute the values by formula as below.
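One way to sidestep the lost-second quirk entirely (a suggestion beyond the original drag-fill approach): LibreOffice stores date/times in days, so an hour is 1/24, and each sampling period can be computed from the one above. Enter this in D2 and fill down:

=D1-(1/24)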

To sum within the sample period, use COUNTIFS in (say) column E to count the occurrences in entire columns that meet a set of criteria; in this case three criteria have to be met: the value of C is 1 (not “”), and the value of A (the time) is on or after the start of the sampling period (D2) and before its end (D1). That is:

=COUNTIFS(C1:C500,"1",$A1:$A500,">="&$D2,$A1:$A500,"<"&$D1)

Filling this formula down populates the sampling periods with the count per sampling period, effectively the occurrence rate per period, in our example errors/hour.

Posted at 08:36:41 GMT-0700

Category: HowTo, technology

Let’s Encrypt….

Sunday, December 10, 2017 

Let’s encrypt, why not?

Wanna know how I did it for FreeBSD/Apache/acme-client, jump below.

Let’s Encrypt is a service from the Internet Security Research Group, backed by the fine people at Mozilla (who, when they’re not trying to prove that Firefox can be a Chrome clone, do some really good stuff) among others. Certificates are what give you the little warm fuzzy feeling of a green lock icon and, when properly configured, avoid giving you that terrifying feeling that something horrible is about to happen when you visit a site with an expired or self-signed certificate.

There are some huge structural problems in the certificate concept that seem to exist only to validate the certificate mafia, which can charge $100s per year for a validated certificate, as if executing the script to issue one were somehow expensive. It is not: you can generate one yourself that provides exactly the same security as one provided by a big company that gets its root certs distributed in browsers, but browsers reject these with scary messages, so webmasters have to keep buying them.

Now there’s a theory behind why they’re ripping you off: the premise is that the certificate verifies the site is who it says it is – that if you go to mybank.com, you’re actually visiting your real bank, not being redirected by a man-in-the-middle attack to some fake landing page to harvest your passwords, log into your account, and steal all your cats.  There are a few problems with this:

  • Nobody actually checks a URL so while a certificate sort of adds some weight to the probability that mybank.com is owned by mybank, not some hacker a few tables over ARP poisoning the cafe wifi, it doesn’t do anything if you click on a link to mibank.com.
  • The companies that claim to check IDs and verify owners do not.  That would cost money. You think they’re gonna actually do that?  No… (CAcert actually does, but they don’t get a root cert because they do it for free and don’t have Mozilla’s money and clout.)
  • Stealing a root cert private key can generate significant LOLZ; it happens a lot.
  • Law enforcement the world over has “lawful intercept” certs.  You’re probably on some country’s poop list if you have ever used social media, and some country’s laws somewhere certainly permit intercepting your communications, no matter who you are.
  • But dang, those annoying warnings that do nothing to secure you mean that people who publish a website just for the good of the planet either have to pay up, go through a lot of hassle, or leave their user’s content streams exposed to the world’s prying eyes…

…Until Let’s Encrypt came along.  It is a lovely little set of tools and services that not only issue browser-accepted certs (see the green lock?) but also automate renewal.  They basically check that you have enough control over your website to let a script write a file that they can read back and verify; if so, you’re who you say you are: the person with write access to the server powering the website they’re giving the certificate to.  That’s all anyone can really do, and it is as secure as any other cert there is for identification of a site.  Except for stolen certs, URL typos, law enforcement certs, or malicious code on your computer, if you visit https://blackrosetech.com and you don’t get any warnings, you’re probably reading data coming off my computer and not some hacker pretending to be me.

I got Let’s Encrypt to work, but it took some modifications of the existing guides, and I think the service is a good thing that more people should use, so in the spirit of investing some of my resources into the great shared experiment that is Open Source, here’s my How To:


Upstream Guides:

I found these two guides extremely helpful.

https://www.richardfassett.com/2017/01/16/using-lets-encrypt-with-acme-client-on-a-freebsd-11apache-2-4/

https://brnrd.eu/security/2016-12-30/acme-client.html

Step 1: Installing the certificate generation tool

There are a few different software tools to manage the Let’s Encrypt process.  I elected to use Kristaps Dzonsons’ acme-client, ported to FreeBSD by Bernard Spil.

I was using OpenSSL on my site.  Bernard and Kristaps have some strong opinions on OpenSSL, heartbleed, and a few other problems, and therefore require LibreSSL.   If you’re using it already, great.  If not, you’ll have to install it.  It wasn’t too terrible, but I ran into a few issues:

https://wiki.freebsd.org/LibreSSL
Or, easy peasy https://ootput.github.io/2016/07/20/Switching-to-LibreSSL/

# ee /etc/make.conf
DEFAULT_VERSIONS+= ssl=libressl
# portmaster -od security/libressl security/openssl
# portmaster -rd security/libressl

if that fails with

===>>> The argument to -r must be a package name, or a glob pattern

Then try:

# pkg version -v | grep libre
libressl-2.6.3 = up-to-date with index
# portmaster -rd libressl-2.6.3
or for a complete refresh
# portmaster -Rafd

Curl will probably fail with LibreSSL (and with the latest, if it has brotli support enabled).  Check the google to see if these fixes are still needed, or just:

# cd /usr/ports/ftp/curl
# make config

disable TLS-SRP  https://forums.freebsd.org/threads/56917/

ftp/curl 7.75.0 has an issue with pied piper brotli, which requires modifying the makefile to build --without-brotli as indicated in comment #2 

(Sunpoet, the curl port maintainer, got back to me with an update: when PR/223966 is integrated in Brotli, he will add an optional Brotli support flag and it should work fine at that point without the Makefile edit.)

Step 2: Actually installing acme-client

The really easy part: you should be able to

# portmaster security/acme-client

and be on your way to configuration heaven.

Step 3: Initial configuration

The defaults for acme-client expect certain directories to exist and the installer doesn’t create them.

# mkdir -pm750 /usr/local/www/.well-known && chown -R www:www /usr/local/www/.well-known
# mkdir -pm750 /usr/local/www/.well-known/acme-challenge && chown -R www:www /usr/local/www/.well-known/acme-challenge

The how-to’s seemed to forget the last one.

And make a modification to your httpd.conf file to permit the Let’s Encrypt servers to have access to these folders:

# ee /usr/local/etc/apache24/httpd.conf

add the following:

# Lets Encrypt challenge directory configured per 
# https://brnrd.eu/security/2016-12-30/acme-client.html
<Directory "/usr/local/www/.well-known/">
        Options None
        AllowOverride None
        Require all granted
        Header add Content-Type text/plain
</Directory>

And, for each VHOST that is going to get a cert:

# ee /usr/local/etc/apache24/extra/httpd-vhosts.conf

add to each non-ssl VHOST definition the following:

Alias /.well-known/ /usr/local/www/.well-known/

such that you end up with something like (yours may be different, especially watch out for BasicAuth or ModRewrite, addressed further down):

<VirtualHost IP.NU.MB.ER:80>
    ServerName domain.com
    ServerAdmin admin@domain.com
    DocumentRoot /usr/local/www/data-dist/domain-root
    ServerAlias *.domain.com www.domain.com
    Alias /.well-known/ /usr/local/www/.well-known/
    ErrorLog /var/log/domain-error_log
    CustomLog /var/log/domain-access_log combined
    ScriptAlias /cgi-prg /www/cgi-prg
</VirtualHost>

Don’t forget!

# apachectl restart

Step 4: First Try

At this point the system should be configured sufficiently to do a trial run with a single domain from the command line. Later on there are some scripts that will automate the process of both converting a large number of VHOSTed domains on a server to Let’s Encrypt and for maintaining them and getting email notifications if anything goes wrong in the, hopefully, fully automatic renewal process.

# acme-client -mvnNC /usr/local/www/.well-known/acme-challenge domain.com www.domain.com

This should create all the directories still needed and populate them, then check with the lets encrypt server and get a certificate and install it in the right place.  Inshalla.

If you get something like

acme-client: transfer buffer: [{ "type": "urn:acme:error:malformed","detail": "Provided agreement URL[https://letsencrypt.org/documents/LE-SA-v1.1.1-August-1-2016.pdf]does not match current agreement URL[https://letsencrypt.org/documents/LE-SA-v1.2-November-15-2017.pdf]","status": 400 }] (267 bytes)

That means the Let’s Encrypt agreement has changed.  You can’t do much but write the port maintainer or wait for an update.  It will get fixed quickly and should only happen once a year; I don’t think you’ll get it at all unless you’re unlucky enough to try to update while the agreement is changing.  I was.

More likely you’ll get something like

acme-client: transfer buffer: [{ "type": "http-01", "status": "invalid", "error": { "type": "urn:acme:error:unauthorized", "detail": "Invalid response from http://www.domain.com/.well-known/acme-challenge/evReZz6s1uSVZbgEVdKkWElx_NHb3NmbbwGbADUwRtQ: (etc...)

This means there’s a problem accessing the /.well-known/ directory by the server.  There can be a lot of reasons for this:

  • You didn’t restart apache # apachectl restart
  • There was an error in the config file (look at the output of the restart) and therefore apache didn’t actually reload with your new config.
  • DNS isn’t pointing where you think it is pointing.  Check with nslookup/whois to make sure.  Really.
  • You have the directories protected in some way – like with .htaccess.  (see below)

But if it goes well, you’ll get something like:

acme-client: /usr/local/etc/acme/domain.com/privkey.pem: account key exists (not creating)
acme-client: /usr/local/etc/ssl/acme/private/domain.com/privkey.pem: domain key exists (not creating)
acme-client: https://acme-v01.api.letsencrypt.org/directory: directories
acme-client: acme-v01.api.letsencrypt.org: DNS: 173.223.13.221
acme-client: acme-v01.api.letsencrypt.org: DNS: 2001:418:142b:290::3d5
acme-client: acme-v01.api.letsencrypt.org: DNS: 2001:418:142b:28d::3d5
acme-client: https://acme-v01.api.letsencrypt.org/acme/new-authz: req-auth: domain.com
acme-client: /usr/local/www/.well-known/acme-challenge/_ffVe6jHNHbIG1XKAeoqQmmtryWMGCKsfHIWWkl5lJw: created
acme-client: https://acme-v01.api.letsencrypt.org/acme/challenge/5HKzgB9diS5ecS6WbYJsHeEXsSWZeMhdYFmMfN9voHA/2673529867: challenge
acme-client: https://acme-v01.api.letsencrypt.org/acme/challenge/5HKzgB9diS5ecS6WbYJsHeEXsSWZeMhdYFmMfN9voHA/2673529867: status
acme-client: https://acme-v01.api.letsencrypt.org/acme/new-cert: certificate
acme-client: http://cert.int-x3.letsencrypt.org/: full chain
acme-client: cert.int-x3.letsencrypt.org: DNS: 184.23.159.176
acme-client: cert.int-x3.letsencrypt.org: DNS: 184.23.159.177
acme-client: cert.int-x3.letsencrypt.org: DNS: 2001:5a8:100::b817:9fb0
acme-client: cert.int-x3.letsencrypt.org: DNS: 2001:5a8:100::b817:9fb1
acme-client: /usr/local/etc/ssl/acme/domain.com/chain.pem: created
acme-client: /usr/local/etc/ssl/acme/domain.com/cert.pem: created
acme-client: /usr/local/etc/ssl/acme/domain.com/fullchain.pem: created

Yay, you’ve got certs!  Now update your vhosts file to point to the certs you just created.  You may need to add a 443 container or, if it exists, update it to point to the new certs and restart apache.

# ee /usr/local/etc/apache24/extra/httpd-vhosts.conf

<VirtualHost IP.NU.MB.ER:443>
      ServerName domain.com
      ServerAdmin admin@domain.com
      DocumentRoot /usr/local/www/domainroot
      ServerAlias domain.com sub.domain.com
      SSLCertificateFile /usr/local/etc/ssl/acme/domain.com/cert.pem
      SSLCertificateKeyFile /usr/local/etc/ssl/acme/private/domain.com/privkey.pem
      SSLCertificateChainFile /usr/local/etc/ssl/acme/domain.com/fullchain.pem
      Header set Strict-Transport-Security "max-age=31536000; includeSubDomains"
      ErrorLog /var/log/domain-error_log
      CustomLog /var/log/domain-access_log combined
</VirtualHost>

save and restart, look for any errors (typos on directory paths etc. will be detected and apache won’t restart, but be aware, it won’t quit either).

# apachectl restart
 Performing sanity check on apache24 configuration:
 Syntax OK
 Stopping apache24.
 Waiting for PIDS: 81160.
 Performing sanity check on apache24 configuration:
 Syntax OK
 Starting apache24.

Navigate to https://domain.com/ and check out your new green lock.  Check security and you should find:

W00T!

Acme-Client Options

# man acme-client has all the deets, but we’re using:

  • -m to append the domain name to paths, use this and always use it or never.
  • -v for verbose output so we can see what is going on.
  • -n to check if an account key exists and create if not (no reason to omit)
  • -N to check if a domain key exists and create if not (also no reason to omit)
  • -C to specify the path to the challenge dir.  These guides all assume a centralized challenge dir outside the main serving path, and to which we redirect via an alias directive.
  • -F which forces the recreation of certs even if they haven’t expired (this counts against your 10 per 3 hours limit)
  • -s which redirects the process to the Let’s Encrypt staging server, which has no volume limits but also doesn’t create certs browsers accept.  (Using this is fine, but requires cleanup to switch to the production server, see below)
  • -e which is used to add a SAN to the certificate.  Removing one is a bit more involved (see below).

Automating Registration

Let’s say you have a lot of domains; you might want to automate the process.  I modified the renewal script to automate the registration process.  This saved some time, but one quirk is that you can only register 10 domains (certificates, including SANs: basically 10 lines of the domains list) per 3 hours (they say; I found it takes more like 12 hours to be allowed to register more).

First create a file with all the domains you want to register for a Let’s Encrypt certificate, in the same format as the renewal script uses (it can be the same file, but I made it different as I was experimenting):

# ee /usr/local/etc/acme/newdomains.txt
domain.com www.domain.com
domain2.com www.domain2.com
domain3.com www.domain3.com 
(save)

# ee /usr/local/etc/acme/acme-client-bulk-add.sh

#!/bin/sh

###
#
# This script was adapted by Richard Fassett from letskencrypt.sh
# by Bernard Spil
# See https://brnrd.eu/security/2016-12-30/acme-client.html
#
# and updated again from richard fassett's script at
# https://www.richardfassett.com/2017/01/16/using-lets-encrypt-with-acme-client-on-a-freebsd-11apache-2-4/#comment-282
#
# this requires a file called /usr/local/etc/acme/newdomains.txt of the format
# domain.tld sub.domain.tld alt.domain.tld
# domain2.tld 
# domain3.tld sub.domain3.tld 
# etc
#
# This should only be run to bulk-add domains.
###

# Define location of dirs and files
DOMAINSFILE="/usr/local/etc/acme/newdomains.txt"
CHALLENGEDIR="/usr/local/www/.well-known/acme-challenge"

# Loop through the newdomains.txt file with lines like
# example.org www.example.org img.example.org
cat ${DOMAINSFILE} | while read domain subdomains ; do

  # Create the cert directory with the command
  # acme-client -mvnNC /usr/local/www/.well-known/acme-challenge (domain subdomains)
  
  acme-client -mvnN -C "${CHALLENGEDIR}" ${domain} ${subdomains}

done

# chmod +x /usr/local/etc/acme/acme-client-bulk-add.sh

A few fixes/recoveries that might be useful at this point: add SAN, remove SAN, switch from staging to production Let’s Encrypt servers.

Automation can break things, you might find you adjusted a few domains incorrectly or want to add a SAN later.

If you need to redo a domain from scratch, for example if you used the -s option, which creates a cert from the staging server that doesn’t have volume limits (maybe you’re testing a lot of domains or trying to debug a particularly tricky .htaccess or DNS condition), you might have created a domain with acme-client -mvnsNC /usr/local/www/.well-known/acme-challenge domain.com www.domain.com and then want to generate the production cert.  You also need to do this to remove a SAN.  If you try without deleting the directories, you’ll get something like unknown SAN entry.  (Replace “domain.com” with your domain.)

# setenv DD domain.com
# rm -r /usr/local/etc/ssl/acme/private/$DD && rm -r /usr/local/etc/acme/$DD && rm -r /usr/local/etc/ssl/acme/$DD && acme-client -mvnFNC /usr/local/www/.well-known/acme-challenge $DD www.$DD

If you need to add a new SAN to an existing domain

acme-client -mvneFNC /usr/local/www/.well-known/acme-challenge domain.com www.domain.com newsub.domain.com

it is the -e that “extends” the certificate.

Step 5: Automating Renewal

You might notice that the duration of the certificate is rather short: 3 months.  You really don’t want to be responding to certificate-expired errors every 3 months, so let’s automate the renewal process.  For this you can create two files and store them on your server: one is the renewal script itself and the other is a list of domains to renew.  This assumes you have more than one domain.  If you only have one domain this is a bit of overkill, but it will work, so why not?  You might get more domains in the future.   Everyone does.

First create a file with your list of domains; call it something creative like “domains.txt”.  This is really a certificate request list with the “primary” domain and Subject Alternative Names (SANs) for each certificate on a single line.  In theory the SANs can be all over the place, and Let’s Encrypt allows up to 100 per certificate (quite a lot), so the implication of the “domains.txt” naming is a bit inaccurate, but that’s what everyone is using so we won’t be contrary.  You have to make sure that all the subdomains resolve: the Let’s Encrypt servers are going to look them up via DNS, and if there aren’t working entries, this will fail with one of the errors above.  Check first.  I have not tested whether, if for example you own domain.com, domain.org, and domain.net and they all point to the same directory, you can use one cert with different TLDs (or domains) as SANs; you should be able to, but I didn’t try.

# ee /usr/local/etc/acme/domains.txt

domain.com www.domain.com sub.domain.com sub2.domain.com
domain.org www.domain.org
domain2.com www.domain2.com cats.domain2.com kittens.domain2.com

Now that you’ve saved that, the following script is adapted from a few at the references listed above and works on my server.  I made a few adjustments and corrections (there was a name change for acme-client which hasn’t quite propagated through all the HowTos yet).

# ee /usr/local/etc/acme/acme-client-update.sh

#!/bin/sh

###
#
# This script was adapted from letskencrypt.sh by Bernard Spil
# See https://brnrd.eu/security/2016-12-30/acme-client.html
# ... and further modified by David Gessel  
# This script will fail if the directories haven't been set up and
# domains in domain.txt have been successfully verified
#
###

# Define location of dirs and files
DOMAINSFILE="/usr/local/etc/acme/domains.txt"
CHALLENGEDIR="/usr/local/www/.well-known/acme-challenge"

# is changed to 1 if any domains expired and were renewed
CHECKEXPIRATION=0

# Loop through the domains.txt file with lines like
# example.org www.example.org img.example.org
cat ${DOMAINSFILE} | while read domain subdomains ; do

    # acme-client returns RC=2 when certificates
    # weren't changed; use set +e to capture the return code
    set +e
    # Renew the key and certs if required
    acme-client -mvb -C "${CHALLENGEDIR}" ${domain} ${subdomains}
    RC=$?

    # now that we have the return code, set script to exit if
    # nonzero is returned
    set -e

    # if anything is expired, we'll want to do something
    # (e.g., restart HTTPS)
    if [ $RC -ne 2 ] ; then
        CHECKEXPIRATION=1
    fi
done

if [ "$CHECKEXPIRATION" -ne "0" ] ; then
        service apache24 restart
fi

# chmod +x /usr/local/etc/acme/acme-client-update.sh

This works quite well and will walk through your domains and renew as needed.

I have 36 domain/certificate lines in my “domains.txt” file, and timing this script shows it takes 2.13 seconds to execute on my server.  There’s no real problem running it every night, and if you have a lot of domains, remember you can only get 10 certs at a time and they won’t renew until about a week before expiry, a limitation I ran into in the bulk setup process.  You can spread your domain renewals out over the three months by force-renewing blocks of them if you have more than about 60 per server.

You probably want to automate the process as a cron job. But before we do, let’s address one more little problem: one of the shortcomings of the script process below is that the output messages of the script go to stdout, and only cron’s stderr is emailed to the admin. If your shell environment is wrong or the path to the script is wrong, cron will tell you, but if your domains don’t resolve or the script can’t reach /.well-known/, you will not get any warnings. That might be a bummer. So I redirect the output of the client-update.sh script to a log file. It gets overwritten with each execution, so it doesn’t need to be rotated; it is just the output of the last execution. It should be filled with lines including “adding SAN” (which it tells you for each domain) and “certificate valid” (which it tells you for each cert that doesn’t need to be renewed). But it might tell you something else, like that it barfed trying to reach the /.well-known/ directory because, say, you messed around with .htaccess or forgot to renew your domain and it is being redirected to parking or something. The following script first checks to see if there are any lines in /var/log/lets-encrypt-renew other than the expected ones, and if so, emails just those lines. You shouldn’t get anything until renewal time or if there’s an error. If you don’t care about renewal notices, you can edit the script to ignore those too.

# ee /usr/local/etc/acme/acme-client-errors.sh

#!/bin/sh

###
# this script scans the log file created by the renewal execution cron job
# then removes any lines containing "adding SAN" or "certificate valid", which
# are normal messages, and mails whatever is left over using the "mail" command
# check full paths (or use relative) but full paths can avoid some errors
# use "# which grep" and "# which mail" on your system to check.

PROBLEM=0

/usr/bin/grep -v "adding SAN" /var/log/lets-encrypt-renew | \
/usr/bin/grep -v "certificate valid" | /usr/bin/cat | \
{ while read status
  do
       PROBLEM=1
  done

  if [ "$PROBLEM" -ne "0" ] ; then
        /usr/bin/grep -v "adding SAN" /var/log/lets-encrypt-renew | \
        /usr/bin/grep -v "certificate valid" | \
        /usr/bin/mail -s "Lets Encrypt Errors" gessel@blackrosetech.com $1
  fi
}

# chmod +x /usr/local/etc/acme/acme-client-errors.sh

My cron configuration is set up as

# crontab -e

#*     *     *   *    *        command to be executed
#-     -     -   -    -
#|     |     |   |    |
#|     |     |   |    +----- day of week (0 - 6) (Sunday=0)
#|     |     |   +------- month (1 - 12)
#|     |     +--------- day of        month (1 - 31)
#|     +----------- hour (0 - 23)
#+------------- min (0 - 59)

MAILTO=gessel
# expanded path
PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/games:/usr/local/sbin:/usr/local/bin
SHELL=/bin/csh
# Let's Encrypt renewal check (daily: renew at 03:00, mail any errors at 04:00)
0       3       *       *       *        /usr/local/etc/acme/acme-client-update.sh >& /var/log/lets-encrypt-renew
0       4       *       *       *        /usr/local/etc/acme/acme-client-errors.sh

Note that this requires that mail works. On servers that aren’t serving email, I use SSMTP and configured it more or less following this guide https://www.freebsd.org/doc/handbook/outgoing-only.html and https://www.davd.eu/freebsd-send-mails-over-an-external-smtp-server/ and this https://www.debarbora.com/freebsd-10-1-setup-ssmtp-for-outgoing-mail/ especially the tip about using # chpass to change the default Full Name for root from “Charlie &” to something useful like “ServerName Root.”

You can test the mail function by adding a random word (or domain) to your domains.txt file and then executing

# /usr/local/etc/acme/acme-client-update.sh >& /var/log/lets-encrypt-renew
# /usr/local/etc/acme/acme-client-errors.sh

If everything is set up right, you’ll get an email complaining about your random word not being valid.  If you restore the correct domains.txt file and execute the above two commands you should not get an email at all.

# more /var/log/lets-encrypt-renew

should show only lines with “adding SAN” and “certificate valid” in them. If you execute # /usr/local/etc/acme/acme-client-errors.sh you shouldn’t get any message.

.htaccess Problems

If you’re controlling access to a directory or have some non-HTML style process listening, you might run into challenges giving the Let’s Encrypt server access to the /.well-known/ directory.  I found the following formulation worked:

AuthType Basic
AuthName "Please login."
AuthUserFile "/xxx/.htpasswd"
# the directive below also "requires" that the requested URL include /.well-known/
Require expr %{REQUEST_URI} =~ m#^/.well-known/.*#
Require valid-user

Basically the script above allows (requires) a “valid-user” (one with an entry in the AuthUserFile and valid matching password) and also requires (allows) a URL that is going to /.well-known/ and subdirectories thereof.  This also works in /usr/local/etc/apache24/httpd.conf and /usr/local/etc/apache24/extra/httpd-vhosts.conf

modRewrite to HTTPS problems

You can also create problems by rewriting to HTTPS.  You might want to do this now that you have certs that will auto-renew and you can provide a secure experience for everyone.   In order to get to the /.well-known/ directory, you have to add an exception to the mod-write rule for traffic to this subdirectory like so:

RewriteEngine on
RewriteCond %{REQUEST_URI} !^/\.well\-known/acme\-challenge/
RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{SERVER_NAME}/$1 [R=301,L]

Also, if you redirect on a 404, some formulations cause problems. This one does not seem to:

ErrorDocument 404 /index.php


Posted at 06:43:58 GMT-0700

Category: FreeBSD, HowTo, Security, technology

10 Gbyte Win10 Spyware “upgrade” now forced on users

Sunday, September 27, 2015 

Microsoft has, historically, done some amazingly boneheaded things like Clippy, Vista, Win 8, and Win 10.  They have one really good product, Excel; otherwise everything they’ve done has succeeded only through illegal exploitation of an aggressively defended monopoly. OK, maybe the Xbox is competitive, but I’m not much of a gamer.

Sadly for the world, the model of selling users for profit to advertisers and spies has gained ground to the point where Microsoft was starting to look like the least evil major entity in closed-source computing.  Poor microsoft.  To lose the evil crown must be at least as humiliating as their waning revenue and abject failures in the mobile space (so strange… try to enter a space where they don’t have a monopoly to force users to accept their mediocre crap and they fail, who’da thunk it?)

“There is a difference between policy and practice. We don’t read customers mail. We don’t read customer documents. We don’t triangulate YouTube views and searches. We don’t use the content of your Hotmail to target ads in Bing,”

Frank Shaw, Corporate Vice President of Corporate Communications for Microsoft

Well, never fear: Windows 10 is here, and they’re radically one-upping the data theft economy by p0wning not just the data you idiotically entrust to someone else’s server for free (without ever considering why they’re giving you that useful service for “free” or what they, or whoever buys their ultimately failed business, might do with your data), but also the data you consider too sensitive for the Google or the Apple.  Windows 10 exfiltrates all your data to Microsoft for their use and profit without informing you.  Don’t believe it? Read their Privacy Statement.

Finally, we will access, disclose and preserve personal data, including your content (such as the content of your emails, other private communications or files in private folders), when we have a good faith belief that doing so is necessary.

And it is free (as in beer but not as in speech).  What could possiblay go wrong?

Well, people weren’t updating fast enough, so Microsoft is now pushing the update on you involuntarily.  Do you have a data cap that a 10G download might break and cost you money?  So what!  Your loss!  Don’t have enough space on your drive for a 10G hidden folder of crapware foisted on you without your permission?  Tough crap, Microsoft don’t care.

To be clear, Windows 10 is spyware.  If this was coming from a teenage hacker somewhere, they’d be facing jail time.  It is absolutely, unequivocally malware that will create a liability for you if you use it.  If you have any confidentiality requirement, you must not install windows 10.  Ever. Not even on your home machine.  Just don’t.

The only way to prevent this is really annoying and a little risky: disable automatic downloads.  One of the problems with Microsoft’s operating systems is the unbelievably crappy spaghetti code that results in a constant flow of cracks; a week’s worth are patched every Tuesday.  About 1 serious vulnerability every fortnight these days (note this is about the same as Ubuntu and about 1/4 the rate of OSX or iOS; why people think Apple products are “secure” is beyond me.  Live in that fantasy walled garden!  But nice logo you paid a 50% premium for on your shiny device). Not patching increases the risk that some hacker somewhere will steal your datas, but patching guarantees that Microsoft will steal your datas.  Keep your anti-virus up to date and live a little dangerously by keeping Microsoft out.

Here’s an interesting article: how-to-clean-the-windows-10-crapware-off-your-windows-7-or-81-pc

And a tool referenced in that article: GWX control panel (that can help remove the windows 10 infection if you got it).

And a list of patches I found that are related to Win10 malware that you can remove if you haven’t installed it yet (Windows 10 eliminates the ability to choose or selectively remove patches, once you’re in for the ride, you’re chained in: all or nothing.)

Basic advice:

  • Disable automatic updates and automatic downloads of updates.
  • Review each update Microsoft offers.  This is tedious; my Win 7 install reports 384 updates, 5-10 a week, but other than security patches, you probably don’t really need them.  Only install a patch if there’s a reason.  Sorry, that sucks, but there’s always Linux Mint: free like beer AND free like speech.
  • If you’re still on Win 7/8, uninstall the spyware Microsoft has probably already installed.  If you’re on Windows 8, you probably want to upgrade to Windows 7 if at all possible.
  • If you succumbed to the pressure and became a Microsoft Product by installing Windows 10, uninstall it.
  • If uninstall doesn’t work, switch to Mint or reinstall 7.

Most importantly, if you develop software for servers or for end users, stop developing for Microsoft (and Apple too).  Respect the privacy of your customers by not exposing them to exploitation by desperate operating system vendors.  In many classes of applications, your customers buy their computers to run your software: they don’t care what operating system it requires – that should be transparent and painless.  Microsoft is no longer an even remotely acceptable choice.  Server applications should run under FreeBSD or OpenBSD and desktop applications should run under Linux.  You can charge more and generate more profit because the total net cost for your customers will be lower.  Split the difference and give them a more reliable, more secure, and lower cost environment and make more money doing so.

Posted at 08:07:54 GMT-0700

Category: FreeBSD, HowTo, Linux, Security, technology

Successful connect to WPA2 with Linux Mint 17

Saturday, September 26, 2015 

I found myself having odd problems connecting to WPA2-encrypted wireless networks with a new laptop.  There must be more elegant solutions to this problem, but this worked for me.  The problem was that I couldn’t connect to a nearby hotspot secured with WPA2 whether I used the default config tool for Mint, Wicd Network Manager, or the command line.  Errors were either “bad password” or the more detailed errors below.

As with any system variation mileage may vary, my errors look like:

wlan0: CTRL-EVENT-SCAN-STARTED 
wlan0: SME: Trying to authenticate with 68:72:51:00:26:26 (SSID='WA-bullet' freq=2462 MHz)
wlan0: Trying to associate with 68:72:51:00:26:26 (SSID='WA-bullet' freq=2462 MHz)
wlan0: Associated with 68:72:51:00:26:26
wlan0: CTRL-EVENT-DISCONNECTED bssid=68:72:51:00:26:26 reason=3 locally_generated=1

and my system config is reported as:

# lspci -vv |grep -i wireless
3e:00.0 Network controller: Intel Corporation Wireless 7260 (rev 6b)
 Subsystem: Intel Corporation Dual Band Wireless-AC 7260
# uname -a
Linux dgzb 3.16.0-38-generic #52~14.04.1-Ubuntu SMP Fri May 8 09:43:57 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

I found useful guides online with commands for manually setting up a wpa_supplicant.conf file and for disabling 802.11n; the combination was needed to get things working.

The following successfully connects to a WPA2-secured network:

$ sudo su
$ iw dev
 ... Interface [interfacename] (typically wlan0, assumed below)
$ iw wlan0 scan
 ... SSID: [ssid]
 ... RSN: (if present means the network is secured with WPA2)
$ wpa_passphrase [ssid] >> /etc/wpa_supplicant.conf 
...type in the passphrase for network [ssid] and hit enter...
$ sh -c 'modprobe -r iwlwifi && modprobe iwlwifi 11n_disable=1'
$ wpa_supplicant -i wlan0 -c /etc/wpa_supplicant.conf

(should show CTRL-EVENT-CONNECTED)
(open a new terminal leaving the connection open, ending the command disconnects)

$ sudo su
$ dhclient wlan0

(should be connected now)
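If you’d rather not leave a terminal tied up holding the connection, wpa_supplicant can daemonize with the -B flag (same config file as above):

$ sudo wpa_supplicant -B -i wlan0 -c /etc/wpa_supplicant.conf
$ sudo dhclient wlan0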

Posted at 10:16:28 GMT-0700

Category: HowTo, Linux, technology

Disk Checks for Large Arrays

Friday, August 21, 2015 

If you have a large array of disks attached to your server (which is obviously going to be running FreeBSD or OpenBSD if you care about security, stability, and scalability), there are some tricks for dealing with large numbers of disks, like having 227 4TB disks attached to a single host.

Using Bash (yes there are security issues, but it is powerful)

# for i in `seq 0 227`; do smartctl -t short /dev/da$i; sleep 15; done

executes a short SMART test on all disks (thanks, Jared). smartctl seems to max out at 32 concurrent tests, so sleep 15 ensures the 3-minute tests are finishing before new ones are executed. If you’re in a hurry, sleep 5 should do the trick and ensure all of them execute.

to get results try something like:

# for i in `seq 0 227`; do echo "/dev/da$i"; smartctl -a /dev/da$i; sleep .5; done
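For a quick pass/fail summary instead of the full -a dump, smartctl -H prints just the overall health assessment (a convenience variant of the loop above, not from the original commands):

# for i in `seq 0 227`; do echo "/dev/da$i"; smartctl -H /dev/da$i | grep -i result; done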

Bulk Fixes

Problem with the disks – need to clear existing formatting?

unmount each disk

# for i in `seq 0 227`; do umount -f /dev/da$i; done

unlock (if needed)

# sysctl kern.geom.debugflags=0x10

Overwrite the start of each disk

# for i in `seq 0 227`; do dd if=/dev/zero of=/dev/da$i bs=1k count=100; done

Overwrite the end of each disk

# for i in `seq 0 227`; do dd if=/dev/zero of=/dev/da$i bs=1m oseek=`diskinfo da$i | awk '{print int($3 / (1024*1024)) - 4;}'`; done

Recreate GPT (for ZFS)

# for i in `seq 0 227`; do gpart create -s gpt /dev/da$i; sleep .5; done

Destroy multipaths

# for i in `seq 1 114`; do gmultipath destroy disk$i; done

Disable multipath completely

# for i in `seq 1 114`; do gmultipath destroy disk$i; done
# gmultipath unload
# mv /boot/kernel-debug/geom_multipath.ko /boot/kernel-debug/geom_multipath.ko.bad
# mv /boot/kernel/geom_multipath.ko /boot/kernel/geom_multipath.ko.bad

Posted at 12:52:56 GMT-0700

Category: FreeBSD, HowTo, technology