# Audio Processing Workflow

Monday, April 18, 2022

I prefer local control of media data.  The rent-to-listen approach of various streaming services is certainly convenient, but pay-forever, you-get-what-we-think-you-should models don’t appeal to me.  Over the decades, I’ve converted various eras of my physical media to digital formats using whatever standards were in vogue at the time and with different emphasis on various metadata tags, yielding a rather heterogeneous collection with some annoying incompatibilities that sometimes show up – for example, when using the Music plugin with NextCloud, streaming via Subsonic to Sublime Music or Ultrasonic on Android.  I spent some time poking around to find a set of tools that satisfied my preferences for organization and structure and filled in a missing gap or two; this is what I’m doing these days and what with.

The steps outlined here are tuned to my particular use case:

• Linux-based process.
• I prefer mp3 to aac or flac because the format is widely compatible.  mp3 is pretty clearly inferior to aac for coding efficiency (aac produces better sound with fewer bits) and aac has some cool features that mp3 doesn’t, but for my use compatibility wins.
• My ears ain’t what they used to be.  I’m not sure I could ever reliably have heard the difference between 320 CBR and 190 VBR, but I definitely can’t now and less data is less data.
• I like metadata and the flexibility in organization it provides, and like it standardized.

So to scratch that itch, I use the following steps:

• Convert FLAC/high-data rate mp3s to VBR (about 190 kbps) with ffmpeg
• Fix MP3 meta info weirdsies with MP3 Diags
• Add Replay Gain tags with loudness-scanner
• Add BPM tags with bpm-tag from bpm-tools
• Use Puddletag to:
  • Clean any stray tags
  • Assign Genre, Artist, Year, Album, Disk Number, Track, Title, & Cover
  • Apply a standard replace function to clean text of weird characters
  • Re-file and rename in a most-OS-friendly way
• Clean up any stray data in the file system.

Links to the tools at the bottom.

### Convert FLAC to MP3 with ffmpeg

The standard tool for media processing is ffmpeg.  This works for me:

find . -depth -type f -name "*.flac" -exec ffmpeg -i {} -q:a 2  -c:v copy -map_metadata 0 -id3v2_version 3 -write_id3v1 1  {}.mp3 \;

A summary:

find                  unix find command to return each found file one-by-one
.                     search from the current directory down
-depth                start at the bottom and work up
-type f               find only files (not directories)
-name "*.flac"        files that end with .flac
-exec ffmpeg          pass each found file to ffmpeg
-i {}                 ffmpeg takes the found file name as input
-q:a 2                VBR MP3 170-210 kbps
-c:v copy             copy the video stream (usually the cover image)
-map_metadata 0       copy the metadata from input to global metadata of output
-id3v2_version 3      write ID3v2.3 tag format (more compatible than ID3v2.4)
-write_id3v1 1        also write old style ID3v1 tags (maybe useless)
{}.mp3 \;             write output file (which yields "/original/filename.flac.mp3")
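
Because the output keeps the full source name, each converted file ends up with a double extension.  A sketch of a cleanup pass using the perl rename tool (the same one used for other cleanups later in this post); the pattern is an assumption based on the output naming above:

# strip the leftover ".flac" from the double extension after conversion
find . -depth -type f -name "*.flac.mp3" -exec rename 's/\.flac\.mp3$/.mp3/' {} +
# once the mp3s are verified, the originals could be removed, e.g.:
# find . -depth -type f -name "*.flac" -delete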


For album encodes with a .cue or in other formats where the above would yield one giant file, Flacon is your friend.  I would use two steps: split the single flac into per-track flacs with Flacon, then run the ffmpeg encoder myself, just for comfort with the encoding parameters.
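
If you’d rather keep the split step on the command line too, shnsplit from shntool can explode a single FLAC along its cue sheet.  This is a sketch of that alternative, not the Flacon workflow itself, and assumes album.flac and album.cue in the current directory:

# split one big FLAC into per-track FLACs named "NN - Title.flac" using the cue sheet
shnsplit -f album.cue -o flac -t "%n - %t" album.flac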

### Convert high data rate CBR MP3 to VBR

Converting high data rate CBR files requires a bit more code to detect that a given file is high data rate and CBR, for which I wrote a small bash script that leverages mediainfo to extract tags from the source file and validate.

#!/bin/bash

# first make sure at least some parameter was passed, if not echo some instructions
if [ $# -eq 0 ]; then
    echo "pass a file name or try: # find . -type f -name \"*.mp3\" -exec recomp.sh {} \;"
    exit 1
fi

# assign input 1 to "file" to make it a little easier to follow
file=$1

# get the media type, the bitrate, and the encoding mode and assign to variables
type=$(mediainfo --Inform='General;%Format/String%' "$file")
brate=$(mediainfo --Inform='General;%OverallBitRate/String%' "$file" |& grep -Eo [0-9]+)
mode=$(mediainfo --Inform='Audio;%BitRate_Mode/String%' "$file")

# first check: is the file an mpeg audio file, if not quit
if [[ "$type" != "MPEG Audio" ]]; then echo$file skipped, not valid audio
exit 0
fi

# second check: if the file is already VBR, move on.
if [[ "$mode" = "Variable" ]]; then echo$file skipped, already variable
exit 0
fi

# third check: the output will be 170-210, no reason to expand low bit rate files
if [[ "$brate" -gt 221 ]] then ffmpeg -hide_banner -loglevel error -i "$file"  -q:a 2  -c:v copy -map_metadata 0 -id3v2_version 3 -write_id3v1 1  "${file}.mp3" rm "${file}"
mv "${file}.mp3" "${file}"
echo $file recompressed to variable fi exit  I named this script “~/projects/recomp/recomp.sh” and call it with find . -depth -type f -name "*.mp3" -exec ~/projects/recomp/recomp.sh {} \;  which will scan down through all sub-directories and find files with .mp3 extensions, and if suitable, re-compress them to VBR as above. Yes, this is double lossy and not very audiophile, definitely prioritizing smaller files over acoustic fidelity which I cannot really hear anyway. ### Fix bad data with MP3 Diags MP3 Diags is a GUI tool for cleaning up bad tags. It is pretty solid and hasn’t mangled any of my files yet. It has two basic functions: passively highlight missing useful tags (replaygain, cover image, etc) and actively fix messed up tags which is a file-changing operation so make backups if needed. I generally just click the tools buttons “1”–”4″ and it seems to do the right thing. Thanks Ciobi! Install was easy on Ubuntu: sudo apt install mp3diags  ### Add ReplayGain Tags To bulk add (or update) ReplayGain tags, I find loudness-scanner very easy. I just use the droplet version and drop folders on it. The defaults do the right thing, computing track and album gain by folder. The droplet pops up a confirmation dialog which can be lost on a busy desktop, remember it. Click to apply the tags then wait for it to finish before closing that tag list window or it will seg fault. The only indication is in the command prompt window used to launch it, which shows “….” as it progresses and when the dots stop, you can close the tags window. I built it from source – these steps did the needful for me: git clone https://github.com/jiixyj/loudness-scanner.git cd loudness-scanner git submodule init git submodule update mkdir build cd build cmake .. make sudo make install  Then launch the droplet with ~/projects/loudness-scanner/build/loudness-drop-gtk  ### Add Beats Per Minute Tags Beats per minute calcs are mostly useful for DJ types, but I use them to easily sort music for different moods or for exercise. The calculation seems a bit arbitrary for things like speech or classical, but for those genres where BPM is relevant, bpm-tools seems to yield results that make sense. Install with sudo apt-get install libsox-fmt-mp3 bpm-tag  Then write tags with (the -f option overwrites existing tags). find . -name "*.mp3" -exec bpm-tag -f {} \;  ### Puddletag Back in my Windows days, I really liked MP3Tag. I was really happy to find puddletag, an mp3tag inspired linux variant. It’s great, does everything it should. I wish I had something like this for image metadata editing: the spreadsheet format is very easy to parse. One problem I had was the deunicode tool wasn’t decoding for me, so I wrote my own wee function to extend the functions.py by calling the unidecode function. only puddlestuff/functions.py needs to be patched to add this useful decode feature. UTF8 characters are well supported in tags, but not in all file structures and since the goal is compatibility, mapping them to fairly intelligible ASCII characters is useful. This works with the 2.1.1 version. Below is a patch file to show the very few changes needed. 
--- functions.py.bak	2022-04-14 13:58:47.937873000 +0300
+++ functions.py	2022-04-14 16:49:23.705786696 +0300
@@ -43,6 +43,7 @@
 from mutagen.mp3 import HeaderNotFoundError
 from collections import defaultdict
 from functools import partial
+from unidecode import unidecode
 import pyparsing
@@ -769,6 +770,10 @@
     cleaned_fn = unicodedata.normalize('NFKD', t_fn).encode('ASCII', 'ignore')
     return ''.join(chr(c) for c in cleaned_fn if chr(c) in VALID_FILENAME_CHARS)
+# hack by David Gessel
+def deunicode(text):
+    dutext = unidecode(text)
+    return (dutext)
 def remove_dupes(m_text, matchcase=False):
     """Remove duplicate values, "Remove Dupes: $0, Match Case $1"
@@ -1126,7 +1131,8 @@
     'update_from_tag': update_from_tag,
     "validate": validate,
     'to_ascii': to_ascii,
-    'to_num': to_num
+    'to_num': to_num,
+    'deunicode': deunicode
 }
 no_fields = [filenametotag, load_images, move, remove_except,


I use the “standard” action to clean up file names with a few changes:

• In “title” and “album” I replace ‘ – ‘ with ‘–‘
• In all fields, I RegExp replace ‘(\s)’ with ‘ ‘ – all blank space with a regular space.
• I replace all %13 characters with a space
• I RegExp replace ‘(\s)+’ with ‘ ‘ – all blank runs with a single space
• Trim all to remove leading and ending spaces.

My tag->filename function looks like this craziness, which reduces the risk of filename misbehavior on most platforms:

~/$validate(%genre%,_,/\*?;”|: +<>=[])/$validate($deunicode(%artist%),_,/\*?;”|: +<>=[])/%year%--$left($validate($deunicode(%album%),_,/\*?;”|: +<>=[]),136)$if(%discnumber%, --D$num(%discnumber%,2),"")/$left($num(%track%,2)--$validate($deunicode(%title%),_,/\*?;”|: +<>=[]),132)

Puddletag is probably in your repository.  To mod the code, I first installed from source per the puddletag instructions, but had to also add unidecode to my system with

pip install unidecode

### Last File System Cleanups

The above steps should yield a clean file structure without leading or trailing spaces, indeed without any spaces at all, but in case it doesn’t, the rename function can help.  I installed it with

sudo apt install rename

This is useful to, for example, normalize errant spelling of mp3 – for example Mp3 or MP3 or, I suppose, mP3.

find . -depth -exec rename 's/\.mp3$/.mp3/i' {} +
Aside from the parameters explained previously:
's/A/B/'            substitute B for each instance of A
\.                  escaped "." because "." has special meaning
$                   match end of string - so .mp3files won't match, but files.mp3 does
i                   case insensitive match (.mp3 .MP3 .mP3 .Mp3 all match)

The following commands clean up errant spaces: leading, trailing, and repeated:

find . -depth -exec rename 's/^ *//' {} +
find . -depth -exec rename 's/ *$//' {} +
find . -depth -exec rename 's/\s+/_/g' {} +
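
Since rename applies its substitution to every path that find hands it, it is worth previewing before committing; the -n flag prints what would change without renaming anything.  A minimal dry-run sketch using the same whitespace-to-underscore rule as above:

# dry run: show what the whitespace cleanup would rename, without touching anything
find . -depth -exec rename -n 's/\s+/_/g' {} +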


If moving files around results in empty directories (or empty files, which shouldn’t happen) then they can be cleaned with

find . -depth -type d -empty -print -delete
find . -depth -type f -empty -print -delete


### Players

If workflow preferences are highly personal, player prefs seem even more so.  Mine are as follows:

#### For Local Playback on a PC: Quod Libet

I like to sort by genre, artist, year, and album, and Quod Libet makes that as easy as foobar2000 did back in the olde days when Windows was still an acceptable desktop OS.  Those days are long, long over and while I am still fond of the foobar2000 approach, Quod Libet doesn’t need Wine.

Alas, one remaining shortcoming is that Quod Libet does not support Subsonic or Ampache.  That’s too bad because I really like the UI/UX.

#### For Subsonic Streaming on a PC: Sublime Music

Not the text editor, the music app.  It is pretty good, prettier than Quod Libet and in a way that doesn’t entirely appeal to me, but it works fairly well with NextCloud and is the best solution I’ve found so far.  It tends to throw quite a few errors and I see an odd bug where album tile selection jumps around, but it works, and a local program linking back to a server is generally more performant than a browser, though that’s also an option (see below), as is running foobar2000 in Wine, perhaps even as an (ugh!) snap.

#### In Browser: NextCloud Music

Nextcloud’s Music app is one of those that imposes a sorting model that doesn’t work for me – not at all foobar2000ish – and so I don’t really use it much, but there are times, working on site for example, that a browser window is easiest.  I find I often have to rebuild the music database after changes.  Foam or Ample might be more satisfying choices functionally and aesthetically and can connect to the backend provided by Music.

#### Mobile: Ultrasonic

Ultrasonic works pretty well for me and seems to connect fairly reliably to my NextCloud server even in low bandwidth situations (though, obviously, not fast enough to actually listen to anything, but it doesn’t barf.)  Power Ampache might be another choice still currently developed (but I haven’t tried it myself).  Subsonic also worked with NextCloud, but I like Ultrasonic better and it is still actively developed.

If you’re on iOS instead of Android (congratulations on the envy your overpriced corporate icon inspires in the less fortunate) you almost certainly stick exclusively with your tribal allegiance and have no need for media outside of iTunes/Apple TV approved content.


Posted at 17:59:27 GMT-0700

Category: AudioHowToLinuxtechnology

# Save your email! Avoid the Thunderbird 78 update

Saturday, June 19, 2021

History repeats itself as the TB devs learn nothing from the misery they created by auto-updating 60.x users to 68 without providing any warning or option to avoid the update.  This is crappy user management.  On updates that will break an installed add-on, the user should be informed of what will be disabled and asked if they want to proceed with the update, not silently forced to conform to a stripped-down, unproductive environment as if the user’s efforts at optimization were childish mistakes unworthy of consideration or notice.

The Thunderbird devs have increasingly adopted a “if you’re not doing it our way, you’re doing it wrong and we’re going to fix your mistake whether you like it or not” attitude.  This is highly annoying because the org already alienated their add-on community by repeatedly breaking the interface models add-on developers relied on.

For a while add-on devs gamely played along, absorbing reputational damage as idiotic and poorly planned actions by Thunderbird devs broke their code, left them to deal with user frustration, and forced them to scramble to fix problems they didn’t create.  Many, if not by now most, add-on developers finally had enough and abandoned ship.  This is tragic because without some of the critical modifications to Thunderbird provided by those developers it is essentially unusable.

I eventually made peace with the add-on-pocalypse between 60 and 68 as add-on developers worked through it, and very carefully set my TB 68 to never update again, even though 90a finally fixes the problem 68 introduced where it became impossible to display dates in ISO 8601 format, but that’s a whole ‘nother kettle of fish.

Still, despite trying to block it, I got a surprise update; if this keeps up, I’ll switch to Interlink Mail and News.

So if you, like I did, got force “upgraded” to 78 from a nicely customized 68, this is what worked for me to undo the damage: (If you weren’t surprise updated, then jump right down to preventing future surprises.)

• Uninstall thunderbird (something like # sudo apt remove thunderbird)
• Download the last 68.x tarball from Mozilla’s release archive, extract it, and copy it (sudo) to /usr/lib/thunderbird
sudo mv ~/downloads/thunderbird/ /usr/lib/thunderbird
• Create a desktop entry

# nano ~/.local/share/applications/tb68.desktop

[Desktop Entry]
Version=1.0
Type=Application
Name=Thunderbird-68
Icon=thunderbird
Exec="/usr/lib/thunderbird/thunderbird"
Comment=last TB version
Categories=Application;Network;Email;
Terminal=false
MimeType=x-scheme-handler/mailto;application/x-xpinstall;
StartupNotify=true
Actions=Compose;Contacts

• Prevent future updates (hopefully) by creating a no-update policy file:
# sudo nano /usr/lib/thunderbird/distribution/policies.json

{
"policies": {
"DisableAppUpdate": true
}
}

and then, just to be sure, break the update checker code:

# sudo mv /usr/lib/thunderbird/updater /usr/lib/thunderbird/no-updater
# sudo mv /usr/lib/thunderbird/updater.ini /usr/lib/thunderbird/no-updater.ini
• Start the freshly downgraded (to the last remotely usable version of) Thunderbird from the command line the first time, with the special downgrade-allowed option:
# /usr/lib/thunderbird/thunderbird --allow-downgrade

If you were unlucky enough to launch TB 78 even once, your add-ons are screwed up now (thanks devs, Merry Christmas to you too).  Those that have a 78-compatible version will have been auto-updated to the 78 version, which isn’t compatible with 68 (w00t w00t, you can see why the plugin devs quit in droves).  At least this time your incompatible add-ons weren’t auto-deleted like with 68.  Screen shot or otherwise capture a list of your disabled plugins, then remove the incompatible ones and add back the previous, 68-compatible releases.

If the “find plugins” step doesn’t find your 68 plugin (weird, but it happens) then google it and download the xpi and manually add it.

• Restart one more time normally to re-enable the 68-compatible add-ons that the 78 launch disabled, this time without the 78 updates.

One more detail – if you find your CardBook remote address books are gone, you need to rebuild your preferences.

• Find your preferences folder: help->Troubleshooting Information-> about:profiles -> Open Directory
• Back up your profile (good thing to do no matter what)
• Uninstall the CardBook plugin
• Quit TB
• In your profiles directory, delete all files that end with .sqlite (rm *.sqlite)
• Restart TB (the .sqlite files should be recreated)
• Reinstall the CardBook plugin.  Your address books should reappear.  (if not, the advice on the interwebs is to create a new profile and start over).

PHEW! just a few hours of lost time and you’ve fixed the misery the TB devs forced on you without asking.  How nice.  What thoughtful people.


Posted at 07:16:58 GMT-0700

Category: HowToLinuxNegativereviewsSecuritytechnology

# Compile and install Digikam on Ubuntu 20.04 Focal (21.10 too)

Friday, March 26, 2021

Digikam is an incredibly powerful media management tool that integrates a great collection of powerful media processing projects into a single, fairly nice and moderately intuitive user interface. The problem is that it makes use of SO many projects and libraries that installation is quite fragile, and most distributions are many years out of date – that is, a typical sudo apt install digikam will yield version 4.5 while the current release is (as of this writing) 7.5.

In particular, this newer version has face detection that runs LOCALLY – not on Google or Facebook’s servers – meaning you don’t have to trade your personal photos and all the data implicit in them to a data broker to make use of such a useful tool.  Sure, Google once bought and then improved Picasa Desktop which gave you this function, but then they realized this was cutting into their data harvesting business and discontinued Picasa and tried to convince people to let them look at all their pictures with Google Photos.  We really, really need to make personal data a toxic asset, such an intolerable liability that any company that holds any personal data has negative value.  But until then, use FOSS software on your own hardware where ever possible.

You can compile the latest version on Ubuntu 20.04 Focal Fossa, though not exactly painlessly, or you can install the flatpak easily. I hate flatpaks with a passion, so I went through the exercise and found what appears to be stable success with the following procedure which yielded a fully featured digikam with zero dependency errors or warnings and all features enabled using MariaDB as a backend.

Updating Ubuntu from 20.04 to 21.10 (probably any other major update too) will (as typical) break a ton of stuff.  For “reasons” the updater uninstalls all sorts of things like MariaDB and many of the dependencies.  Generally, as libraries change versions, recompiling is required.  This is so easy with FreeBSD ports…

### Install and configure MariaDB

sudo apt update
sudo apt install mariadb-server
sudo mysql_secure_installation

The secure options are all good, accept them unless you know better.

Start the server (if it isn’t already running):

sudo systemctl start mariadb.service
sudo systemctl enable mariadb --now
sudo systemctl status mariadb.service


Do some really basic config:

sudo nano /etc/mysql/mariadb.conf.d/50-server.cnf


and set:

character-set-server = utf8mb4
collation-server = utf8mb4_general_ci
default_storage_engine = InnoDB


Switch to mariadb and create an admin user account and (I’d suggest) one for digikam.  It seems this has to be done before the first connect and can’t be fixed after.  You’ll probably want to use a different ‘user’ than I did, but feel free.

sudo mariadb
CREATE USER 'gessel'@'localhost' IDENTIFIED BY 'password';
GRANT ALL ON *.* TO 'gessel'@'localhost' IDENTIFIED BY 'password';
CREATE DATABASE digikam;
GRANT ALL PRIVILEGES ON digikam.* TO 'gessel'@'localhost';
FLUSH PRIVILEGES;


This should create the correct user – though check the instructions tab on the database connection options pane for any changes if you’re following these instructions to install a later version. You will need the socket location to connect to the database, so before exit; run:

mysqladmin -u admin -p version


Should yield something like:

Enter password:
mysqladmin  Ver 9.1 Distrib 10.3.25-MariaDB, for debian-linux-gnu on x86_64
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Protocol version	10
Connection		Localhost via UNIX socket
UNIX socket		/var/run/mysqld/mysqld.sock
Uptime:			5 hours 26 min 6 sec

Threads: 29  Questions: 6322899  Slow queries: 0  Opens: 108  Flush tables: 1  Open tables: 74  Queries per second avg: 323.157


And note the value for UNIX socket, you’re going to need that later: /var/run/mysqld/mysqld.sock – yours might vary.

### Install digiKam Dependencies

#### Updates 2021-10-30 🎃

• Updated to libx264-163 and libx265-199
• Added libopencv-dev dependency
• Version change from 7.2.0 to 7.3.0

• Installing on Ubuntu 21.10 “impish”
• Version change to 7.5.0 (note camelcase used for file name now, “digiKam” not “digikam“)
• A problem with libopencv-dev required selecting a # sudo aptitude install solution to get past a “libilmbase-dev ... but it is not installable” error.

Digikam has just a few dependencies... just a few.  The command below should install everything needed for 7.3.0 on Ubuntu 21.10; any other version combination might be different:

sudo aptitude install \
bison \
checkinstall \
devscripts \
doxygen \
extra-cmake-modules \
ffmpeg \
ffmpegthumbnailer \
flex \
graphviz \
help2man \
jasper \
libavcodec-dev \
libavdevice-dev \
libavfilter-dev \
libavformat-dev \
libavutil-dev \
libboost-dev \
libboost-graph-dev \
libeigen3-dev \
libexiv2-dev \
libgphoto2-dev \
libjasper-dev \
libjasper-runtime \
libjasper4 \
libjpeg-dev \
libkf5calendarcore-dev \
libkf5contacts-dev \
libkf5doctools-dev \
libkf5kipi-dev \
libkf5notifyconfig-dev \
libkf5sane-dev \
libkf5solid-dev \
libkf5xmlgui-dev \
liblcms2-dev \
liblensfun-dev \
liblqr-1-0-dev \
libmagick++-6.q16-dev \
libmagick++-6.q16hdri-dev \
libmagickcore-dev \
libmarble-dev \
libqt5opengl5-dev \
libqt5sql5-mysql \
libqt5svg5-dev \
libqt5webkit5-dev \
libqt5webview5 \
libqt5webview5-dev \
libqt5x11extras5-dev \
libqt5xmlpatterns5-dev \
libqtav-dev \
libqtwebkit-dev \
libswscale-dev \
libtiff-dev \
libusb-1.0-0-dev \
libx264-163 \
libx264-dev \
libx265-199 \
libx265-dev \
libxml2-dev \
libxslt1-dev \
marble \
pkg-kde-tools \
qtbase5-dev \
qtbase5-dev-tools \
qtmultimedia5-dev \
qtwebengine5-dev \
libopencv-dev \
qtwebengine5-dev-tools


### Compile Digikam

Switch to your projects directory (~/projects, say) and get the source, cross your fingers, and go to town. The make -j4 command will take a while to compile everything.  There are two basic mechanisms for getting the source code: wget the tarball or git pull the repository.

#### wget the tarball

Check the latest version at https://download.kde.org/stable/digikam/  It was 7.2.0, then 7.3.0, is now 7.5.0, and will certainly change again. This is currently a 255.3 MB download (!).

wget https://download.kde.org/stable/digikam/7.5.0/digiKam-7.5.0.tar.xz
tar -xvf digiKam-7.5.0.tar.xz
cd digiKam-7.5.0


#### git pull the repository

Git uses branches/tags, so check the pull-down list of the latest branches and tags at the top left; below the many, many branches is the tag list at https://invent.kde.org/graphics/digikam/-/tree/v7.5.0 , latest on top, currently 7.5.0. This is currently a 1.4 GB git pull (!!).
There was an issue in the v7.3.0 tag that caused builds to fail which was fixed in current, so building “stable” isn’t always the best choice for stability.

git clone -b v7.5.0 https://invent.kde.org/graphics/digikam.git
cd digikam


Then follow the same steps:

./bootstrap.linux
cd build
make -j4
sudo su
make install/fast


Compiling might take 15-30 minutes depending on CPU.  Adjust -jx to optimize build times; the normal rule of thumb is that x = number of cores (or cores+1), YMMV, and 4 is a reasonable number if you aren’t confident or interested in experimenting.
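
If you’d rather not guess at the core count, nproc reports it, so a simple alternative to a fixed -j4 is:

# use one make job per available core
make -j$(nproc)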

The ./bootstrap.linux result should be as below; if it indicates something is missing then double check dependencies.  If you’ve never compiled anything before, you might need to install cmake and some other basics not in the apt install list above:

-- ----------------------------------------------------------------------------------
--  digiKam 7.2.0 dependencies results   <https://www.digikam.org>
--
--  MySQL Database Support will be compiled.. YES (optional)
--  MySQL Internal Support will be compiled.. YES (optional)
--  DBUS Support will be compiled............ YES (optional)
--  App. Style Support will be compiled...... YES (optional)
--  QWebEngine Support will be compiled...... YES (optional)
--  libboostgraph found...................... YES
--  libexiv2 found........................... YES
--  libexpat found........................... YES
--  libjpeg found............................ YES
--  libkde found............................. YES
--  liblcms found............................ YES
--  libopencv found.......................... YES
--  libpng found............................. YES
--  libpthread found......................... YES
--  libqt found.............................. YES
--  libtiff found............................ YES
--  bison found.............................. YES (optional)
--  doxygen found............................ YES (optional)
--  ccache found............................. YES (optional)
--  flex found............................... YES (optional)
--  libakonadicontact found.................. YES (optional)
--  libmagick++ found........................ YES (optional)
--  libeigen3 found.......................... YES (optional)
--  libgphoto2 found......................... YES (optional)
--  libjasper found.......................... YES (optional)
--  libkcalendarcore found................... YES (optional)
--  libkfilemetadata found................... YES (optional)
--  libkiconthemes found..................... YES (optional)
--  libkio found............................. YES (optional)
--  libknotifications found.................. YES (optional)
--  libknotifyconfig found................... YES (optional)
--  libksane found........................... YES (optional)
--  liblensfun found......................... YES (optional)
--  liblqr-1 found........................... YES (optional)
--  libmarble found.......................... YES (optional)
--  libqtav found............................ YES (optional)
--  libthreadweaver found.................... YES (optional)
--  libxml2 found............................ YES (optional)
--  libxslt found............................ YES (optional)
--  libx265 found............................ YES (optional)
--  OpenGL found............................. YES (optional)
--  libqtxmlpatterns found................... YES (optional)
--  digiKam can be compiled.................. YES
-- ----------------------------------------------------------------------------------


### Launch and configure Digikam

(if you’re still root, exit root before launching # digikam)

The configuration options are pretty basic, but note that to configure the Digikam back end you’ll need the user you created and that MariaDB socket value you noted before, entered like so: UNIX_SOCKET=/var/run/mysqld/mysqld.sock

On the first run, it will download about 350 MB of data for the face recognition engine.  Hey – maybe a bit heavy, but you’re not giving Google or Apple free lookie looks at all your personal pictures.  Also, if all this is a bit much (and, frankly, it is), I’d consider Digikam one of the few applications that makes the whole flatpak thing seem somewhat justified.  Maybe.

### Some advice on tuning:

I recommend mysqltuner highly, then maybe check this out (or just leave it default, default works well).

Tuning a database is application and computer specific, there’s no one size fits any, certainly not all, and it may change as your database grows. There are far more expert and complete tuning guides available, but here’s what I do:

#### Pre-Tuning Data Collection

Tuning at the most basic involves instrumenting the database to log problems, running it for a while, then parsing the performance logs for useful hints. The mysqltuner.pl script is far more expert at this than I’ll ever be, so I pretty much just trust it. You have to modify your mysqld.cnf file to enable performance data collection (which, BTW, slows down operation, so undo this later), which, for MariaDB, means adding a few lines:

sudo nano /etc/mysql/mariadb.conf.d/50-server.cnf
# enable performance schema to allow optimization, but ironically hit performance, so disable after tuning.
# in the [mysqld] section insert
performance_schema=ON
performance-schema-instrument='stage/%=ON'
performance-schema-consumer-events-stages-current=ON
performance-schema-consumer-events-stages-history=ON
performance-schema-consumer-events-stages-history-long=ON


I rather like this guide’s helpful instructions for putting the script in /usr/local/sbin/ so it is in the execution path:

sudo wget https://raw.githubusercontent.com/major/MySQLTuner-perl/master/mysqltuner.pl -O /usr/local/sbin/mysqltuner.pl
sudo chmod 700 /usr/local/sbin/mysqltuner.pl
sudo mysqltuner.pl


Then restart with sudo service mariadb restart and go about your business with digikam – make sure you rack up some real hours to gather useful data on your performance. Things like ingesting a large collection should generate useful data. I’d suggest doing disk tuning first because that’s hardware dependent, not load dependent.

#### Disk tuning

Databases tend to hammer storage and SSDs, especially SLC/enterprise SSDs, massively improve DB performance over spinning disks – unless you have a massive array of really good rotating drives. I’m running this DB on one spinning disk, so performance is very MEH. MySQL and MariaDB make some assumptions about disk performance which is used to scale some pretty important parameters for write caching. You can meaningfully improve on the defaults by testing your disk with a great linux utility called “fio”.

sudo apt install fio
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75


This will take a while and will give some very detailed information about the performance of your disk subsystem, the key parameters being average and max write IOPS. I typically create a # performance tuning section at the end of my [mysqld] section and before [embedded] and I’ll put these values in as, say: (your IOPS values will be different):

# performance tuning

innodb_io_capacity              = 170
innodb_io_capacity_max          = 286


and sudo service mariadb restart

#### Using mysqltuner.pl

After you’ve collected some data, there may be a list of tuning options.

sudo nano /etc/mysql/mariadb.conf.d/50-server.cnf

Mine currently look like this, but they’ll change as the database stabilizes and my usage patterns change.

# performance tuning

innodb_io_capacity              = 170
innodb_io_capacity_max          = 286

innodb_buffer_pool_size         = 4G
innodb_log_file_size            = 512M
innodb_buffer_pool_instances    = 4
skip_name_resolve               = 1
query_cache_size                = 0
query_cache_type                = 0
query_cache_limit               = 2M
max_connections                 = 175
join_buffer_size                = 4M
tmp_table_size                  = 24M
max_heap_table_size             = 24M
innodb_buffer_pool_size         = 4G
max_allowed_packet              = 128M


and

sudo service mariadb restart

Note max_allowed_packet = 128M comes from this guide. I trust it, but it isn’t a mysqltuner suggestion.

Posted at 17:11:21 GMT-0700

Category: HowToLinuxphotoPositivereviewstechnology

# Tagging MP3 Files with Puddletag on Linux Mint

Tuesday, March 23, 2021

A “fun” part of organizing an MP3 collection is harmonizing the tags so the data works consistently with whatever management schema you prefer.  My preference is management by the file system—genre/artist/year/album/tracks works for me—but consistent metainformation is required and often disharmonious.  Finding metaharmony is a chore I find less taxing with a well-structured tag editor, and to my mind the ur-meta-tag manager is MP3TAG.

The problem is that it only works with that dead-end, spyware-riddled, failing legacyware called “Windows.” Fortunately, in Linux-land we have puddletag, a very solid clone of MP3TAG.  The issue is that the version in the repositories is (as of this writing) 1.2.0 and I couldn’t find a PPA for the latest, 2.0.1.  But compiling from source is super easy and works in both Linux Mint 19 and Ubuntu 20.04 (yay open source!):

1. Install pre-reqs to build (don’t worry, if they’re installed, they won’t be double installed)
2. get the tarball of the source code
3. expand it (into a reasonable directory, like ~/projects)
4. switch into that directory
5. run the python executable “puddletag” directly to verify it is working
6. install it
7. tell the desktop manager it’s there – and it should be in your window manager along with the rest of your applications.

The latest version as of this post was 2.0.1 from https://github.com/puddletag/puddletag

sudo apt install python3-pyqt5 python3-pyqt5.qtsvg python3-pyparsing python3-mutagen python3-acoustid libchromaprint-dev libchromaprint-tools libchromaprint1
tar -xvf puddletag-2.0.1.tar.gz
cd puddletag-2.0.1/
cd puddletag
./puddletag
sudo python3 setup.py install
sudo desktop-file-install puddletag.desktop


A nice feature is that the configuration directory is portable and takes your complete customization with you – it is an extremely customizable program, so you can generally configure it to fit your mental model.  Just copy the entire puddletag directory located at ~/.config/puddletag.
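
A minimal sketch of carrying that configuration to another machine (the backup path here is just an example):

# back up the puddletag configuration
cp -a ~/.config/puddletag ~/backups/puddletag-config
# restore it on the new machine
cp -a ~/backups/puddletag-config ~/.config/puddletag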

Posted at 15:19:01 GMT-0700

Category: AudioHowToLinuxPositivereviewsuncategorized

# Favicon generation script

Monday, December 21, 2020

Favicons are a useful (and fun) part of the browsing experience.  They once were simple – just an .ico file of the right size in the root directory.  Then things got weird and computing stopped assuming an approximate standard ppi for displays, starting with mobile and “retina” displays.  The obvious answer would be .svg favicons, but, wouldn’t’ya know, Apple doesn’t support them (neither does Firefox mobile) so for a few more iterations, it still makes sense to generate an array of sizes with code to select the right one.  This little tool pretty much automates that from a starting .svg file.

There are plenty of good favicon scripts and tools on the interwebs. I was playing around with .svg sources for favicons and found it a bit tedious to generate the sizes considered important for current (2020-ish) browsing happiness. I found a good start at stackexchange by @gary, though the sizes weren’t the currently recommended ones (per this github project). Your needs may vary, but it is easy enough to edit.

The script relies on the following wonderful FOSS tools:

• Inkscape (renders the .svg at each pixel size)
• pngquant (compresses and optimizes the .png output)
• ImageMagick’s convert (builds the multi-layer favicon.ico)

These are available in most distros (software manager had them in Mint 19).

Note that my version leaves the format as .png – the optimized png will be many times smaller than the .ico format and png works for everything except IE<11, which nobody should be using anyway.  The favicon.ico generated is 16, 32, and 48 pixels in 3 different layers from the 512×512 pixel version.

The command line options for inkscape changed a bit; the bash script below has been updated to reflect the current ones.

The code below can be saved as a bash file; set the execution bit, call it as ./favicon file.svg, and off you go:

#!/bin/bash

# this makes the output verbose
set -ex

# collect the file name you entered on the command line (file.svg)
svg=$1

# set the sizes to be generated (plus 310x150 for msft)
size=(16 32 70 128 150 152 167 180 192 310 512)

# set the write directory as a favicon directory below current
out="$(pwd)"
out+="/favicon"
mkdir -p $out

echo Making bitmaps from your svg...

for i in ${size[@]}; do
    inkscape -o "$out/favicon-$i.png" -w $i -h $i $svg
done

# Microsoft wide icon (annoying, probably going away)
inkscape -o "$out/favicon-310x150.png" -w 310 -h 150 $svg

echo Compressing...

for f in $out/*.png; do pngquant -f --ext .png "$f" --posterize 4 --speed 1 ; done;

echo Creating favicon

convert $out/favicon-512.png -define icon:auto-resize=48,32,16 $out/favicon.ico

echo Done


Copy the .png files generated above as well as the original .svg file into your root directory (or, if in a sub-directory, add the path below), editing the “color” of the Safari pinned tab mask icon. You might also want to make a monochrome version of the .svg file and reference that as the “mask-icon” instead; it will probably look better, but that’s more work. The following goes inside the head directives in your index.html to load the correct sizes as needed (delete the lines for Microsoft’s browserconfig.xml file and/or Android’s manifest file if not needed):

<!-- basic svg -->
<link rel="icon" type="image/svg+xml" href="/favicon.svg">
<!-- generics -->
<link rel="icon" href="favicon-16.png" sizes="16x16">
<link rel="icon" href="favicon-32.png" sizes="32x32">
<link rel="icon" href="favicon-128.png" sizes="128x128">
<link rel="icon" href="favicon-192.png" sizes="192x192">
<!-- Android -->
<link rel="shortcut icon" href="favicon-192.png" sizes="192x192">
<link rel="manifest" href="manifest.json" />
<!-- iOS -->
<link rel="apple-touch-icon" href="favicon-152.png" sizes="152x152">
<link rel="apple-touch-icon" href="favicon-167.png" sizes="167x167">
<link rel="apple-touch-icon" href="favicon-180.png" sizes="180x180">
<link rel="mask-icon" href="/favicon.svg" color="brown">
<!-- Windows -->
<meta name="msapplication-config" content="/browserconfig.xml" />

For WordPress integration, you don’t have access to a standard index.html file, and there are crazy redirects happening, so you need to append to your theme’s functions.php file the below code snippet wrapped around the above icon declaration block (optimally your child theme, unless you’re a theme developer, since it’ll get overwritten on update otherwise):

/* Allows browsers to find favicons */
add_action('wp_head', 'add_favicon');
function add_favicon(){ ?>
REPLACE THIS LINE WITH THE BLOCK ABOVE
<?php };

Then, just for Windows 8 & 10, there’s an xml file to add to your directory (root by default in this example). Also note you need to select a color for your site. The file has to be named “browserconfig.xml”:

<?xml version="1.0" encoding="utf-8"?>
<browserconfig>
  <msapplication>
    <tile>
      <square70x70logo src="/favicon-70.png"/>
      <square150x150logo src="/favicon-150.png"/>
      <wide310x150logo src="/favicon-310x150.png"/>
      <square310x310logo src="/favicon-310.png"/>
      <TileColor>#ff8d22</TileColor>
    </tile>
  </msapplication>
</browserconfig>

There’s one more file that’s helpful for mobile compatibility, the android save-to-desktop file, “manifest.json“. This requires editing and can’t be pure copy pasta. Fill in the blanks and select your colors:

{
  "name": "",
  "short_name": "",
  "description": "",
  "start_url": "/?homescreen=1",
  "icons": [
    {
      "src": "/favicon-192.png",
      "sizes": "192x192",
      "type": "image/png"
    },
    {
      "src": "/favicon-512.png",
      "sizes": "512x512",
      "type": "image/png"
    }
  ],
  "theme_color": "#ffffff",
  "background_color": "#ff8d22",
  "display": "standalone"
}

Check the icons with this favicon tester (or any other).
Manifest validation: https://manifest-validator.appspot.com/

Posted at 17:26:44 GMT-0700

Category: HowToLinuxself-publishing

# CoreELEC/Kodi on a Cheap S905x Box

Friday, September 11, 2020

I got a cheap Android TV box off Amazon a while back, consolidated, and it ended up being spare. One of the problems with these TV boxes is they don’t get OS updates after release and soon video apps stop working because of annoying DRM mods.  I figured I’d try to switch one to Linux and see how it went.  There are some complications in that these are ARM devices (not x86/x64) and getting ADB to connect is slightly complicated due to it not being a USB device.

There are a few variations of the “ELEC” (Embedded Linux Entertainment Center): OpenELEC, LibreELEC, and CoreELEC (at least).  CoreELEC focuses on ARM boxes and makes sense for the cheapest boxes.

What you get is a little Linux-based device that boots to Kodi, runs an SMB server, and provides SSH access – basically an open source SmartTV that doesn’t spy on you.  It is well suited to a media library you own, rather than rent, such as your DVD/Blu-Ray collection.  Perhaps physical media will come back into vogue as the rent-to-watch streaming world fractures into a million individual subscriptions.  Once moved to the FOSS world, there are regular updates and, generally, people committed to keeping things working – far better and far longer term support than OEMs offer for random cheap android devices (or even quite expensive ones).

#### A: Download the CoreELEC image

You need to know the processor and RAM of your device – note that a lot of different brands are the same physical boxes.  My “CoCoq M9C Pro” (not listed) is also sold as a “Bqeel M9C Pro” (listed) and the SOC (S905X) and RAM (1G) match, so the Bqeel image worked fine.  The download helper at the bottom of the https://coreelec.org/ site got me the right file easily.

#### B: Image a uSD or USB device for your wee box

I’m running Linux as my desktop so I used the LibreELEC Creator Tool and just selected the file downloaded previously.  That part went easily.  The only issue was that after imaging, Linux didn’t recognize the device and I had to

systemctl --user restart gvfs-udisks2-volume-monitor

before I could do the next step.

#### C: One manual file move for bootability

ELEC systems use a “device tree” – a file with hardware descriptors that are device specific to get (most of) the random cheap peripherals working on these cheap boxes.  You have to find the right one in the /Device Trees folder on your newly formatted uSD/USB device, copy it to the root directory, and rename it dtb.img.

#### D: Awesome, now what?

Once you have your configured, formatted, bootable uSD/USB you have to get your box to boot off it.  This is a bit harder without the power/volume control buttons normally on Android devices.

I have ADB set up to strip bloatware from phones.  You just need one command to get this to work, reboot update, but getting it to your box is a bit of a trick.  Most of these boxes are supplied rooted, so you don’t need to do that, but you do need a command line tool and to enable ADB.

To enable ADB: navigate to the build number (usually Settings > System > About phone > Build number) and click it a bunch of times until it says “you’re a developer!”  Then go to developer options and enable ADB.  You can’t use it yet though because there’s no USB.
Install a Terminal Emulator for Android and enable ADB over TCP:

su
setprop service.adb.tcp.port 5555
stop adbd
start adbd

Then check your IP with ifconfig and on your desktop computer run

adb connect dev.ip.add.ress:5555

(dev.ip.add.ress is the IP address of your device – like 192.168.100.46, but yours will be different)

Then execute adb reboot update from the desktop computer and the box should reboot, then come up in CoreELEC.

Going back to Android is as easy as disconnecting the boot media and rebooting; the Android OS is still on the device.

#### Issue: WIFI support

Many of these cheap devices use the cheapest WIFI chips they can and often the MFGs won’t share details with FOSS developers, so driver support is weak.  Bummer, but the boxes have wired connections and wired works infinitely better anyway: it isn’t a phone, it’s attached to a TV that’s not moving around, run the damn wire and get a stable, reliable connection.  If that’s not possible, check the WIFI chips before buying or get a decent USB-WIFI adapter that is supported.

Posted at 16:46:48 GMT-0700

Category: HowToLinuxtechnology

# WebP and SVG

Tuesday, September 1, 2020

Using WebP coded images inside SVG containers works.  I haven’t found any automatic way to do it, but it is easy enough manually and results in very efficiently coded images that work well on the internets.  The manual process is to Base64 encode the WebP image and then open the .svg file in a text editor and replace the xlink:href="data:image/png;base64, ..." with xlink:href="data:image/webp;base64,..." (“…” means the appropriate data, obviously).

Back in about 2010 Google released the spec for WebP, an image compression format that provides a roughly 2-4x coding efficiency over the venerable JPEG (vintage 1974), derived from the VP8 CODEC they bought from ON2.  VP8 is a contemporary of and technical equivalent to H.264 and was developed during a rush of innovation to replace the aging MPEG-II standard that included Theora and Dirac.  Both VP8 and H.264 CODECs are encumbered by patents, but Google granted an irrevocable license to all patents, making it “open,” while H.264’s patents compel licensing from MPEG-LA.  One would think this would tend to make VP8 (and the WEBM container) a global standard, but Apple refused to give Google the win and there’s still no native support in Apple products.

A small aside on video and still coding techniques: all modern “lossy” (throwing some data away, like .mp3, as opposed to “lossless,” meaning the original can be reconstructed exactly, as in .flac) CODECs are founded on either Discrete Cosine Transform (DCT) or Wavelet (DWT) encoding of “blocks” of image data.  There are far more detailed write-ups online that explain the process in detail, but the basic concept is to divide an image into small tiles of data, then apply a mathematical function that converts that data into a form which sorts the information from least human-perceptible to most human-perceptible and sets some threshold for throwing away the least important data while leaving the bits that are most important to human perception.  Wavelets are promising, but never really took off, as in JPEG2000 and Dirac (which was developed by the BBC).  It is a fairly safe bet that any video or still image you see is DCT coded thanks to Nasir Ahmed, T. Natarajan and K. R. Rao.
The differences between 1993’s MPEG-1 and 2013’s H.265 are largely around how the data that is perceptually important is encoded in each still (intra-frame coding) and some very important innovations in inter-frame coding that aren’t relevant to still images.  It is the application of these clever intra-frame perceptual data compression techniques that is most relevant to the coding efficiency difference between JPEG and WebP.

Back to the good stuff… Way back in 2010 Google experimented with the VP8 intra-coding techniques to develop WebP, a still image CODEC that had to have two core features:

• better coding efficiency than JPEG,
• ability to handle transparency like .png or .tiff.

This could be the one standard image coding technique to rule them all – from icons to gigapixel images, all the necessary features and many times better coding efficiency than the rest.  Who wouldn’t love that?

Apple.  Of course it was Apple.  Can’t let Google have the win.  But, finally, with Safari 14 (June 22, 2020 – a decade late!) iOS users can finally see WebP images and websites don’t need crazy auto-detect 1974-tech substitution tricks.  Good job Apple!

It may not be a coincidence that Apple has just released their own still image format based on the intra-frame coding magic of H.265, .heif, and maybe they thought it might be a good idea to suddenly pretend to be an open player rather than a walled-garden-screw-you, lest iOS insta-users wall themselves off from the 90% of the world that isn’t willing to pay double to pose with a fashionable icon in their hands.  Not surprisingly, .heic, based on H.265 developments, is meaningfully more efficient than WebP, based on VP8/H.264-era techniques, but as it took WebP 10 years to become a usable web standard, I wouldn’t count on .heic having universal support soon.  Oh well.

In the mean time, VP8 gave way to VP9 then to VP10, which has now become AV1, arguably a generation ahead of HEVC/H.265.  There’s no hardware decode (yet, as of end of 2020) but all the big players are behind it, so I expect 2021 devices will support it and GPU decode will come in 2021.  By then, expect VVC (H.266) to be replacing HEVC (H.265) with a ~35% coding efficiency improvement.  Along with AV1’s intra/inter-frame coding advance, the intra-frame techniques are useful for a still format called AVIF; basically AVIF is to AV1 (“VP11”) what WEBP is to VP8 and HEIF is to HEVC.  So far (Dec 2020) only Chrome and Opera support AVIF images.  Then, of course, there’s JPEG XL on the way.  For now, the most broadly supported post-JPEG image codec is WEBP.

SVG support in browsers is a much older thing – Apple embraced it early (SVG was not developed by Google so….) and basically everything but IE has full support (IE… the tool you use to download a real browser).  So if we have SVG and WebP, why not both together?  Oddly I can’t find support for this in any of the tools I use, but as noted at the open, it is pretty easy.  The workflow I use is to:

• Generate a graphic in GIMP or Photoshop or whatever and save as .png or .jpg as appropriate to the image content with little compression (high image quality)
• Combine that with graphics in Inkscape.
• If the graphics include type, convert the type to SVG paths to avoid font availability problems or having to download a font file before rendering the text or having it render randomly.
• Save the file (as .svg, the native format of Inkscape)
• Convert the image file to WebP with a reasonable tool like Nomacs or Irfanview.
• Base64 encode the image file, either with base64 (# base64 infile.webp > outfile.webp.b64) or with this convenient site.
• If you use the command line option, the prefix to the data is “data:image/webp;base64,”
• Replace the … in the appropriate xlink:href="...." with the new data using a text editor like Atom.
• Drop the file on a browser page to see if it works.

WordPress blocks .svg uploads without a plugin, so you need one.  The picture is 101.9kB and tolerates zoom quite well (give it a try, click and zoom on the image).

Posted at 08:54:16 GMT-0700

Category: HowToLinuxphotoself-publishingtechnology

# Dealing with Apple Branded HEIF .HEIC files on Linux

Saturday, August 22, 2020

Some of the coding tricks in H.265 have been incorporated into MPEG-H coding, an ISO standard introduced in 2017, which yields a roughly 2:1 coding efficiency gain over the venerable JPEG, which was introduced in 1992.  Remember that?  I do; I’m old.  I remember having a hardware NuBus JPEG decoder card.  One of the reasons JPEG has lasted so long is that images have become a small storage burden (compared to 4k video, say) and that changing format standards is extremely annoying to everyone.

Apple has elected to make every rational person’s life difficult, put a little barbed wire around their high-fashion walled garden, and do something a little special with their brand of a HEVC (h.265) profile for images.  Now normally seeing iOS users’ insta images of how fashionable they are isn’t really worth the effort, but now and then a useful correspondent joins the cult, forks over a ton of money to show off a logo, and starts sending you stuff in their special proprietary format.  Annoying, but fixable.

Assuming you’re using an OS that is neither primarily spyware nor fashion forward, such as Linux Mint, you can install HEIF decode (including Apple Brand HEIC) with a few simple commands:

$ sudo add-apt-repository ppa:jakar/qt-heif
$ sudo apt update
$ sudo apt install qt-heif-image-plugin

Once installed, various image viewers should be able to decode the images.  I rather like nomacs as a fairly tolerable replacement for Irfan Skiljan‘s still awesome irfanview.
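
If you need to convert the files rather than just view them, libheif’s example tools can transcode to JPEG; this is an alternative to the Qt plugin approach above, and the package name below is the Ubuntu/Mint one:

sudo apt install libheif-examples
# convert an Apple-branded HEIC to JPEG
heif-convert IMG_1234.HEIC IMG_1234.jpg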

Posted at 03:56:36 GMT-0700

Category: HowToLinuxphotoPositivereviewstechnology

# Integrate Fail2Ban with pfSense

Monday, July 13, 2020

Fail2Ban is a very nice little log monitoring tool that is used to detect cracking attempts on servers, extract the malicious IPs, and do things to them – usually temporarily adding the IP address of the source of badness to the server’s firewall “drop” list so that IP’s bad packets are lost in the aether.  This is great, but instead of running a firewall on every server, each locally detecting and blocking malicious actors, it’d be cool to detect across all services and servers on the LAN and push the results up to a central firewall so the bad IPs can’t reach the network at all.  This is one method to achieve that goal.

I like pfSense as a firewall and run FreeBSD on my servers; I couldn’t find a prebuilt tool to integrate F2B with pfSense, but it wasn’t hard to hack something together so it worked. Basically I have F2B maintain a local “block list” of bad IPs as a simple text file which is published via Apache, from where pfSense grabs it and applies it as a LAN-wide IP filter.  I use the pfSense package pfBlockerNG to set up the tables, but in the end a custom script running on the pfSense server actually grabs the file and updates the pfSense block lists from it on a 1 minute cron job.
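
That 1 minute cron job on the pfSense side looks something like the sketch below, added via the pfSense cron package or an equivalent /etc/crontab entry; the script path matches the one used later in this post:

# run the block list updater every minute
*/1	*	*	*	*	root	/root/custom/brtblock.sh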

There are plenty of well-written guides for getting F2B working and how to configure it for jails; I found the following useful:

The custom bits I did to get it to work are:

### Custom F2B Action

On the protected side, I modified the “dummy.conf” script to maintain a list of bad IPs in an Apache served location that pfSense could reach.  F2B manages that list, putting bad IPs in “jail” and letting them out as in any normal F2B installation–but instead of being the local server’s packet filter, it is a web-published text list.

# Fail2Ban configuration file
#
# Author: David Gessel
# Based on: dummy.conf by Cyril Jaquier
#

[Definition]

# Option:  actionstart
# Notes.:  command executed on demand at the first ban (or at the start of Fail2Ban if actionstart_on_demand is set to false).
# Values:  CMD
#

actionstart = if [ -z '' ]; then
touch
printf %%b "# \n"
fi
chmod 755
echo "%(debug)s started"

# Option:  actionflush
# Notes.:  command executed once to flush (clear) all IPS, by shutdown (resp. by stop of the jail or this action)
# Values:  CMD
#

actionflush = if [ ! -z '' ]; then
rm -f
touch
printf %%b "# \n"
fi
chmod 755
echo "%(debug)s clear all"

# Option:  actionstop
# Notes.:  command executed at the stop of jail (or at the end of Fail2Ban)
# Values:  CMD
#
actionstop = if [ ! -z '' ]; then
rm -f
touch
printf %%b "# \n"
fi
chmod 755
echo "%(debug)s stopped"

# Option:  actioncheck
# Notes.:  command executed once before each actionban command
# Values:  CMD
#
actioncheck =

# Option:  actionban
# Notes.:  command executed when banning an IP. Take care that the
#          command is executed with Fail2Ban user rights.
# Tags:    See jail.conf(5) man page
# Values:  CMD
#

actionban = printf %%b "\n"
sed -i '' '/^\$/d'
sort -u  -o
chmod 755
echo "%(debug)s banned  (family: )"

# Option:  actionunban
# Notes.:  command executed when unbanning an IP. Take care that the
#          command is executed with Fail2Ban user rights.
# Tags:    See jail.conf(5) man page
# Values:  CMD
#

# flush the IP using grep which is supposed to be about 15x faster than sed
# grep -v "pattern" filename > filename2; mv filename2 filename

actionunban = grep -v "<ip>" <target> > <temp>
              mv <temp> <target>
              chmod 755 <target>
              echo "%(debug)s unbanned <ip> (family: <family>)"

debug = [<name>] <actname> <target> --

[Init]

init = BRT-DNSBL

target = /usr/jails/claudel/usr/local/www/data-dist/brt/dnsbl/brtdnsbl.txt
temp = <target>.tmp
to_target = >> <target>


Once this list is working, move on to the pfSense side.
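
A quick sanity check before moving on is to confirm the published list is reachable; the URL below uses the same placeholder hostname as the pfSense-side script later in this post:

curl -s https://server.ip/brtdnsbl.txt | tail

If that prints the most recently banned IPs (or just the comment header while the jails are empty), the F2B side is done.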

### Set up pfBlockerNG

The basic setup of pfBlockerNG is well described, for example in https://protectli.com/kb/how-to-setup-pfblockerng/ and it provides a lot of useful blocking options, particularly with externally maintained lists of internationally recognized bad actors.  There are two basic functions, related but different:

#### DNSBL

Domain Name Service Block Lists are lists of domains associated with unwanted activity and blocking them at the DNS server level (via Unbound) makes it hard for application level services to reach them.  A great use of DNSBLs is to block all of Microsoft’s telemetry sites, which makes it much harder for Microsoft to steal all your files and data (which they do by default on every “free” Windows 10 install, including actually copying your personal files to their servers without telling you!  Seriously.  That’s pretty much the definition of spyware.)

It also works for non-corporate-sponsored spyware, for example lists of command and control servers found for botnets or ransomware.  This can help prevent such attacks by denying trojans and viruses access to their instruction servers.  It can also easily help identify infected computers on the LAN, as any blocked requests are logged (to 1.1.1.1 at the moment, which is an unfortunate choice given that 1.1.1.1 is now a well-reputed DNS server, like Google’s 8.8.8.8 but, it seems, without all the corporate spying).  There is a bit of irony in blocking lists of telemetry-gathering IPs using lists that are built using telemetry.

Basically, DNSBLs prevent services on the LAN from reaching nasty destinations on the internet by answering any DNS request for a malicious domain name with a dead-end IP address.  When your Windows machine wants to report your web browsing habits to Microsoft, it instead gets a “page not found” error.
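
For example, with Unbound doing the resolving on pfSense, a domain on a DNSBL should answer with the dead-end address rather than its real IP; a quick check from a LAN host might look like the following, where the resolver address and domain are placeholders:

# ask the pfSense resolver about a domain that is on one of the block lists;
# a sinkholed answer (1.1.1.1 by default, as noted above) means the DNSBL is working
dig @192.168.1.1 blocked-domain.example +short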

#### IPBL

This integration concept uses an IPBL, a list of IP addresses to block.  An IPBL works at a lower level than a DNSBL and is typically set up to block traffic in both directions–a script kiddie trying to brute force a password can be blocked from reaching the services on the LAN, and the reverse direction is blocked too: if a malicious entity trips F2B, not only are they blocked from trying to reach in, any sneaky services on your LAN are also blocked from reaching out to them on the internet.

All we need to do is get the block list F2B is maintaining into pfSense.  pfBlockerNG can subscribe to the list easily enough, but its minimum update time is an hour, which is an awfully long time to let someone try to guess passwords or flood your servers with 404 requests or whatever else you’re using F2B to detect and stop.  So I wrote a short script that grabs the IP list F2B maintains, cleans it, and uses it to update the packet filter drop lists:

/root/custom/brtblock.sh

#!/usr/bin/env sh
# set -x # uncomment for "debug"

# Get latest block list
/usr/local/bin/curl -m 15 -s https://server.ip/brtdnsbl.txt > /var/db/pfblockerng/original/BRTDNSBL.orig
# filter for at least semi-valid IPs.
/usr/bin/grep  -Eo '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' /var/db/pfblockerng/original/BRTDNSBL.orig > /var/db/pfblockerng/native/BRTDNSBL.txt
# update pf tables
/sbin/pfctl -t pfB_BRTblock -T replace -f /var/db/pfblockerng/native/BRTDNSBL.txt > /dev/null 2>&1


HT to Jared Davenport for helping to debug the weird /env issues that arise when trying to call these commands directly from cron.
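
To confirm the table is actually being populated on the pfSense side, pfctl can list or test the live table directly; a quick check from a pfSense shell (the test address below is just an illustrative IP) might be:

# list every address currently loaded into the pfBlockerNG table
/sbin/pfctl -t pfB_BRTblock -T show

# check whether a specific address would match the table
/sbin/pfctl -t pfB_BRTblock -T test 203.0.113.10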

#### Preventing Self-Lockouts

One of the behaviors of pfBlockerNG that the dev seems to think is a feature is automatic filter order management.  This overrides manually sorted filter orders and puts pfB’s block filters ahead of all other filters, including, say, allow filters for your own IPs, which you never want locked out in case you forget your passwords and accidentally trigger F2B on yourself.  To fix this, you have to use a non-default setting and make all IP block list “action” types “Alias_Native.”

To use Alias_Native lists, you write your own per-alias filter (typically “drop” or “reject”) and then pfBlockerNG won’t auto-order them for you on update.

### Cron Plugin

The last ingredient is to update the list on pfSense quickly.  pfSense is designed to be pretty easy to maintain so it overwrites most of the file structure on upgrade, making command line modifications frustratingly transient.  I understand that /root isn’t flushed on an upgrade so the above script should persist inside the /root directory.  But crontab -e modifications just don’t stick around.  To have cron modifications persist, install the “Cron” package with the pfSense package manager.  Then just set up a cron job to run the script above to keep the block list updated.  “*/1” means run the script once a minute.
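
For reference, a minimal once-a-minute entry for the script above, expressed as a plain crontab line (the Cron package presents the same fields in its GUI), would look roughly like:

*/1  *  *  *  *  root  /root/custom/brtblock.sh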

### Results

The system seems to be working well enough; the list of miscreants is small but effectively targeted: 11,840 packets dropped from an average of about 8-10 bad IPs at any given time.

Posted at 05:48:43 GMT-0700

Category: FreeBSD, HowTo, Security, technology

# Save your email! Avoid the Thunderbird 68 update

Thursday, November 28, 2019

### UPDATE:  78 just repeated history with another unwelcome surprise update.

I’ve come to some peace with 68 as most of the really critical plugins were updated.  But 78 is a long way from there and TB devs have continued to create some really bad blood with add-on developers.  I’d argue that the strategy being taken by the devs toward compatibility is defensible, but they seem deaf to the empty wasteland they’ve made of the add-on marketplace.   For me, one of the critical deficiencies is losing the support of the Enigmail developers (curiously, this 2019 post seems to be a bit behind release 2.2.4.1, which apparently adds support (!).)

(This is now a bit obsolete, referencing the last screw-ya’ll update that was pushed out without notice or option; it is clearly too much to hope that TB devs stop being so sure that “if you don’t do email our way, you’re doing it wrong.”)

### TL;DR

If you’ve customized TB with plugins you care about, DO NOT UPDATE to 68 until you verify that every plugin you use is compatible.  TB will NOT check for you, and once you launch 68, any plugins that have been updated for 68 compatibility will no longer work with 60.x, which means you’d better have a backup of your .thunderbird profile folder or you’re going to be filled with seething rage when you have to undo the update. This misery is the consequence of Mozilla having failed to fully uphold their obligation to the user and developer communities that rely on and have enhanced the tools they control.

BTW: if you’re using Firefox and miss the plugins that made it more than just a crappy clone of Chrome, Waterfox is great and actually respects users and community  developers.  Give it a try.

#### Avoid Thunderbird 68 Hell

To avoid this problem now and in the future, you have to disable automatic updates.  In Thunderbird: Edit->Preferences->Advanced->General->[Config Editor…]->app.update.auto=False, app.update.enabled=False.
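
An alternative sketch: the same two preferences can be pinned with a user.js file in the profile directory (finding the profile is covered below), which Thunderbird re-applies at every startup:

// user.js in the Thunderbird profile directory
user_pref("app.update.auto", false);
user_pref("app.update.enabled", false);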

On Linux, you should also disable OS updates of the package using Synaptic: select the installed thunderbird 60.x package and then, from the menu bar, Package->Lock Version.
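
The terminal equivalent of the Synaptic version lock, if you prefer it, is to put the package on hold:

sudo apt-mark hold thunderbird

# later, when you actually want to allow an update again
sudo apt-mark unhold thunderbird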

If you’ve been surprise-updated to the catastrophically incompatible developer vanity project and massive middle finger to the plugin developer community that is 68 (and 60 to a lesser extent), then you have to revert.  This sucks, as 60.x isn’t in the repos.

#### Undo Thunderbird 68 Hell

First, do not run 68.  Ever.  Don’t.  It will cause absolute chaos with your plugins.  First it showed most of mine as incompatible, then updated some, then showed others as compatible but had deleted their .xpi files, so they weren’t in the .thunderbird folder any more despite being listed and incorrectly shown as compatible.  This broke some things I could live without, like Extra Format Buttons, but others I really needed, like Dorando Keyconfig and Sieve.  Mozilla’s attitude appears to be “if you’re using software differently than we think you should, you’re doing it wrong.”

The first step, before breaking things even more, is to back up your .thunderbird directory.  You can find the location from Help->Troubleshooting Information->Application Basics->Profile Directory.  Just click [Open Directory].  Make a backup copy of this directory before doing anything else if you don’t already have one; in Linux a command might be:

tar -jcvf thunderbird_profile_backup.tar.bz2 .thunderbird


If you’re running Windows, old installers of TB are available here.

In Linux, using a terminal, see what versions are available in your distro:

apt-cache show thunderbird

I see only 1:68.2.1+build1-0ubuntu0.18.04.1 and 1:52.7.0+build1-0ubuntu1. Oh well, neither is what I want. While in the terminal, uninstall Thunderbird 68:

sudo apt-get remove thunderbird

As my distro, Mint 19.2, only has 68.x and 52.x in the apt cache, I searched for a .deb file of a recent version.  I couldn’t find the last plugin-compatible version, 60.9.0, as an easy-to-install .deb (though it is available for manual install from Ubuntu), so I am running 60.8.0, which works.  One could instead download the executable of 60.9.1, put it somewhere (/opt, say), and then update start scripts to execute from that location, as sketched below.
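
A rough sketch of that manual route, assuming the 60.9.1 Linux tarball from Mozilla’s release archive (the exact URL and paths here are illustrative; check ftp.mozilla.org for the real file names):

cd /tmp
wget https://ftp.mozilla.org/pub/thunderbird/releases/60.9.1/linux-x86_64/en-US/thunderbird-60.9.1.tar.bz2
sudo tar -xjf thunderbird-60.9.1.tar.bz2 -C /opt

# then point your launcher or start script at the unpacked binary
/opt/thunderbird/thunderbird &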

I found the .deb file of 60.8.0 at this helpful historical repository of Mozilla installers.  Generally the GUI will auto-install on execution of the download, but don’t launch it until you restore your pre-68 .thunderbird profile directory or it will autocreate profile files that are a huge annoyance.  If you don’t have a pre-68 profile, you will probably have to hunt down pre-68-compatible versions of all of your plugins, though I didn’t note any catastrophic profile incompatibilities (YMMV).
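
Restoring the backup made earlier is just the reverse of creating it, assuming the archive was made from your home directory as in the tar command above:

cd ~
tar -jxvf thunderbird_profile_backup.tar.bz2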

Good luck. Mozilla just stole a day of your life.

Posted at 07:51:32 GMT-0700

Category: HowTo, technology