Positive

On things for which my opinion was net positive.

The end of a comic era

Sunday, May 14, 2023 

Tonight I listened to the last episode of NPR's excellent and hilarious Ask Me Another. Though originally broadcast on 2021-09-24, it didn't reach my ears until tonight thanks to the magic of podcasts. It was genuinely hard to hear them sign off for the last time. I will really miss this show and the warmth and good spirits of Ophira Eisenberg and Jonathan Coulton.

I've been listening to this show since it started, back far enough that it first reached me over syndicated FM broadcast on KQED at home, and since then on various digital media over the years wherever I've been, even here in Iraq. It suffered when Covid hit: the energy and charm didn't translate well to Zoom without an audience, as so many things didn't, and sadly the show didn't live to see Covid restrictions lifted. It would have been fitting if they'd been able to record their last show at The Bell House one more time. Maybe someday they can have a reunion show.

US Public Radio has been an anchor of good quality programming, from Car Talk, which I still listen to weekly despite the questions being increasingly out of touch (though the cars have long been fairly irrelevant), to Fresh Air and Terry Gross' voice, which came from my mother's kitchen radio every afternoon from WHYY about as far back as I can remember.

Posted at 17:42:12 GMT-0700

Category: EventsFunnyMediaPositiveReviews

Compile and install Digikam 8.1 on Ubuntu 22.04 (Jammy Jellyfish)

Friday, March 26, 2021 

Digikam is an incredibly powerful media management tool that integrates a great collection of powerful media processing projects into a single, fairly nice and moderately intuitive user interface. The problem is that it makes use of SO many projects and libraries that installation is quite fragile, and most distributions are many years out of date – that is, a typical sudo apt install digikam will yield version 4.5 while the current release is (as of this writing) 8.1.

In particular, this newer version has face detection that runs LOCALLY – not on Google or Facebook's servers – meaning you don't have to trade your personal photos and all the data implicit in them to a data broker to make use of such a useful tool. Sure, Google once bought and then improved Picasa Desktop, which gave you this function, but then they realized this was cutting into their data harvesting business, so they discontinued Picasa and tried to convince people to let them look at all their pictures with Google Photos, which is massively creepy. We really, really need to make personal data a toxic asset, such an intolerable liability that any company that holds any personal data has negative value. But until then, use FOSS software on your own hardware wherever possible.

You can compile the latest version on Ubuntu 22.04 (Jammy Jellyfish), though not exactly painlessly, or you can install the flatpak or AppImage easily. I hate flatpaks with a passion (AppImage is much better: it is self-contained, though it still breaks the integration value of having a program installed on your computer, all because library maintenance is tedious and devs can't be bothered), so I went through the exercise and found what appears to be stable success with the following procedure, which yielded a fully featured digikam with zero dependency errors or warnings and all features enabled, using MariaDB as a backend.

Updating Ubuntu from 20.04 to 21.10 (or any other major update) will, as is typical, break a ton of stuff. For "reasons" the updater uninstalls all sorts of things like MariaDB and many of the dependencies. Generally, as libraries change versions, recompiling is required. This is so easy with FreeBSD ports…

Install and configure MariaDB

sudo apt update
sudo apt install mariadb-server
sudo mysql_secure_installation

The secure options are all good; accept them unless you know better.

Start the server (if it isn’t)

sudo systemctl start mariadb.service
sudo systemctl enable mariadb --now
sudo systemctl status mariadb.service

Do some really basic config:

sudo nano /etc/mysql/mariadb.conf.d/50-server.cnf

and set:

character-set-server = utf8mb4
collation-server = utf8mb4_general_ci
default_storage_engine = InnoDB
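
Since the server is already running at this point, restart it so the new settings take effect (a step worth doing here, though not in the original notes):

sudo systemctl restart mariadb.service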

Switch to mariadb and create an admin user account and (I'd suggest) one for digikam, as below. It seems this has to be done before the first connect and can't be fixed after. You'll probably want to use a different user name than I did, but feel free.

sudo mariadb
CREATE USER 'gessel'@'localhost' IDENTIFIED BY 'password';
GRANT ALL ON *.* TO 'gessel'@'localhost' IDENTIFIED BY 'password';
CREATE DATABASE digikam;
GRANT ALL PRIVILEGES ON digikam.* TO 'gessel'@'localhost';
FLUSH PRIVILEGES;

This should create the needed user – though check the instructions tab on the database connection options pane for any changes if you're following these instructions to install a later version. You will need the socket location to connect to the database, so before running exit;, run:

mysqladmin -u gessel -p version

Should yield something like:

Enter password: 
mysqladmin  Ver 9.1 Distrib 10.3.25-MariaDB, for debian-linux-gnu on x86_64
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Server version		10.3.25-MariaDB-0ubuntu0.20.04.1
Protocol version	10
Connection		Localhost via UNIX socket
UNIX socket		/var/run/mysqld/mysqld.sock
Uptime:			5 hours 26 min 6 sec

Threads: 29  Questions: 6322899  Slow queries: 0  Opens: 108  Flush tables: 1  Open tables: 74  Queries per second avg: 323.157

And note the value for UNIX socket – you're going to need that later: /var/run/mysqld/mysqld.sock – yours might vary.
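
If you'd rather query for it than parse the version banner, the socket path is also exposed as a server variable; a quick check, using the user created above:

mysql -u gessel -p -e "SHOW VARIABLES LIKE 'socket';"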

Install digiKam Dependencies

Updates 2021-10-30 🎃

  • Updated to libx264-163 and libx265-199
  • Added libopencv-dev dependency
  • Version change from 7.2.0 to 7.3.0

Updates 2022-02-01 🧧

  • Installing on Ubuntu 21.10 “impish”
  • Version change to 7.5.0 (note camelcase used for file name now, “digiKam” not “digikam“)
  • Problem with libopencv-dev required selecting a sudo aptitude install solution to get past a "libilmbase-dev is not installable" error.

Updates 2023-09-29 🥮

  • Installing on Ubuntu 22.04 “Jammy Jellyfish”
  • Version change to 8.1.0 (note camelcase used for file name now, “digiKam” not “digikam”)
  • libjasper4 → libjasper7
  • version 8 migrated to QT6
  • libx264-163 → libx264-164
  • Qt x11 extras removed with QT6
  • libqt5xmlpatterns5-dev replaced with Rajce plugin
  • Marble (geolocation) won’t work with QT6 quite yet (as of writing). A patch was pushed 2023-09-24 but hasn’t hit repos.

Updates 2024-04-24 🌺

  • Installing (still) on Ubuntu 22.04 “Jammy Jellyfish”
  • Version change to 8.3.0
  • libqt6networkauth6-dev available and listed now.
  • bootstrap failed without qtmultimedia5-dev, now listed, but I still get QtMultimedia Support will be compiled.... NO (optional)
  • akonadicontact is installed, but at version 4:22.04.3, and there doesn’t seem to be a PPA for updating it, so that might have to wait for 24.04, Noble Numbat, which is expected any day. This might also fix the QtMultimedia issue. If it doesn’t, I’ll file bug reports.

Digikam has just a few dependencies… just a few. The below command should install the packages needed for 7.3.0 on Ubuntu 21.10; any other version combination might be different. Things are a bit screwy between QT5 and QT6, apologies if this is mixed up:

sudo aptitude install \
bison \
checkinstall \
devscripts \
doxygen \
extra-cmake-modules \
ffmpeg \
ffmpegthumbnailer \
flex \
graphviz \
help2man \
jasper \
libavcodec-dev \
libavdevice-dev \
libavfilter-dev \
libavformat-dev \
libavutil-dev \
libboost-dev \
libboost-graph-dev \
libeigen3-dev \
libexiv2-dev \
libgphoto2-dev \
libjasper-dev \
libjasper-runtime \
libjasper7 \
libjpeg-dev \
libkf5akonadicontact-dev \
libkf5calendarcore-dev \
libkf5contacts-dev \
libkf5doctools-dev \
libkf5filemetadata-dev \
libkf5kipi-dev \
libkf5notifications-dev \
libkf5notifyconfig-dev \
libkf5sane-dev \
libkf5solid-dev \
libkf5threadweaver-dev \
libkf5xmlgui-dev \
liblcms2-dev \
liblensfun-dev \
liblqr-1-0-dev \
libmagick++-6.q16-dev \
libmagick++-6.q16hdri-dev \
libmagickcore-dev \
libmarble-dev \
libqt5xmlpatterns5-dev \
libqt6core5compat6-dev \
libqt6opengl6-dev \
libqt6openglwidgets6 \
libqt6sql6-mysql \
libqt6svg6-dev \
libqt6networkauth6-dev \
qt6-webengine-dev \
libqt6webview6 \
qt6-webview-dev \
libqtav-dev \
libqtwebkit-dev \
libswscale-dev \
libtiff-dev \
libusb-1.0-0-dev \
libx264-164 \
libx264-dev \
libx265-199 \
libx265-dev \
libxml2-dev \
libxslt1-dev \
marble \
pkg-kde-tools \
qt6-base-dev \
qt6-base-dev-tools \
qt6-multimedia-dev \
qtmultimedia5-dev \
libopencv-dev \
qt6-webengine-dev-tools

Compile Digikam

Switch to your projects directory (~/projects, say), get the source, cross your fingers, and go to town. The make -j4 command will take a while to compile everything. There are two basic mechanisms for getting the source code: wget the tarball or git pull the repository.

Download the tarball

Check the latest version at https://download.kde.org/stable/digikam/ – it was 7.3.0, but is now 8.1.0, and will certainly change again. This is currently a 255.3 MB download (!). Note the csclub mirror below has 8.0.0.

wget https://mirror.csclub.uwaterloo.ca/kde/Attic/digikam/8.0.0/digiKam-8.0.0.tar.xz
tar -xvf digiKam-8.0.0.tar.xz
cd digiKam-8.0.0

git pull the repository

Git uses branches/tags, so check the pull-down list of branches and tags at the top left; below the many, many branches is the tag list at https://invent.kde.org/graphics/digikam/-/tree/v8.3.0 , latest on top, currently 8.3.0. This is currently a 1.56 GB git pull (!!).
There was an issue in the v7.3.0 tag that caused builds to fail, fixed in current, so building “stable” isn’t always the best choice for stability. If you’re not upgrading, skip the delete directory command.

sudo rm -r digikam
git clone -b v8.3.0 https://invent.kde.org/graphics/digikam
cd digikam

Then follow the same steps whether you git-ed or wget-ed:

./bootstrap.linux
cd build
make -j4
sudo su
make install/fast

Compiling might take 15-30 minutes depending on CPU. Adjust -jx to optimize build times; the normal rule of thumb is x = number of cores (or cores+1), YMMV, and 4 is a reasonable number if you aren't confident or interested in experimenting. 8.3 also downloads the trained data sets on launch, which is kinda interesting; they are also a little chonky bitwise.
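
If you'd rather not guess at a -j value, nproc reports the available cores; a convenient variant of the make step (not part of the original recipe):

make -j"$(nproc)"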

The ./bootstrap.linux result should be as below; if it indicates something is missing then double-check dependencies. If you’ve never compiled anything before, you might need to install cmake and some other basics not in the apt install list above:

-- ----------------------------------------------------------------------------------
-- digiKam 8.3.0 dependencies results <https://www.digikam.org>
-- 
-- MySQL Database Support will be compiled.. YES (optional)
-- MySQL Internal Support will be compiled.. YES (optional)
-- Showfoto Support will be compiled........ YES (optional)
-- DBUS Support will be compiled............ YES (optional)
-- App. Style Support will be compiled...... YES (optional)
-- QWebEngine Support will be compiled...... YES (optional)
-- Geolocation Support will be compiled..... YES (optional)
-- Media Player Support will be compiled.... YES (optional)
-- QtMultimedia Support will be compiled.... NO (optional)
-- libboostgraph found...................... YES
-- libexiv2 found........................... YES
-- libexpat found........................... YES
-- libjpeg found............................ YES
-- libkde found............................. YES
-- liblcms found............................ YES
-- libopencv found.......................... YES
-- libpng found............................. YES
-- libpthread found......................... YES
-- libqt found.............................. YES
-- libtiff found............................ YES
-- bison found.............................. YES (optional)
-- doxygen found............................ YES (optional)
-- ccache found............................. YES (optional)
-- flex found............................... YES (optional)
-- libakonadicontact found.................. NO (optional)
-- digiKam will be compiled without KDE desktop address book support.
-- Please install the libakonadicontact (version >= 5.19.0) development package.
-- 
-- libimagemagick found..................... YES (optional)
-- libeigen3 found.......................... YES (optional)
-- libgphoto2 found......................... YES (optional)
-- libjasper found.......................... YES (optional)
-- libkcalendarcore found................... YES (optional)
-- libkfilemetadata found................... YES (optional)
-- libkiconthemes found..................... YES (optional)
-- libkio found............................. YES (optional)
-- libknotifications found.................. YES (optional)
-- libknotifyconfig found................... YES (optional)
-- libsonnet found.......................... YES (optional)
-- libksane found........................... YES (optional)
-- liblensfun found......................... YES (optional)
-- libglib2 found........................... YES (optional)
-- libthreadweaver found.................... YES (optional)
-- libxml2 found............................ YES (optional)
-- libxslt found............................ YES (optional)
-- libheif found............................ YES (optional)
-- libx265 found............................ YES (optional)
-- OpenGL found............................. YES (optional)
-- libqtxmlpatterns found................... YES (optional)
-- digiKam can be compiled.................. YES
-- ----------------------------------------------------------------------------------

Launch and configure Digikam

(if you’re still root, exit root before launching # digikam)

The configuration options are pretty basic, but note that to configure the Digikam back end you’ll need the MariaDB socket value you got before and the user you created, like so: UNIX_SOCKET=/var/run/mysqld/mysqld.sock


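Before pointing digikam at the database, it's worth a quick shell-level sanity check that the socket and credentials actually work (using the user and database created earlier; adjust names if yours differ):

mysql --socket=/var/run/mysqld/mysqld.sock -u gessel -p digikam -e "SELECT VERSION();"
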
On the first run, it will download about 350 MB of code for the face recognition engine. Hey – maybe a bit heavy, but you’re not giving Google or Apple free lookie looks at all your personal pictures. Also, if all this is a bit much (and, frankly, it is), I’d consider Digikam one of the few applications that makes the whole flatpak thing seem somewhat justified. Maybe.

Some advice on tuning:

I recommend mysqltuner highly, then maybe check this out (or just leave it default, default works well).

Tuning a database is application and computer specific – there’s no one size fits any, certainly not all – and it may change as your database grows. There are far more expert and complete tuning guides available, but here’s what I do:

Pre-Tuning Data Collection

Tuning at the most basic involves instrumenting the database to log problems, running it for a while, then parsing the performance logs for useful hints. The mysqltuner.pl script is far more expert at this than I’ll ever be, so I pretty much just trust it. You have to modify your mysqld.cnf file to enable performance data collection (which, BTW, slows down operation, so undo this later), which, for MariaDB, means adding a few lines:

sudo nano /etc/mysql/mariadb.conf.d/50-server.cnf
# enable performance schema to allow optimization; it ironically hits performance, so disable after tuning.
# in the [mysqld] section insert
performance_schema=ON
performance-schema-instrument='stage/%=ON'
performance-schema-consumer-events-stages-current=ON
performance-schema-consumer-events-stages-history=ON
performance-schema-consumer-events-stages-history-long=ON

Follow the instructions for installing mysqltuner.pl at https://github.com/major/MySQLTuner-perl#downloadinstallation

I rather like this guide’s helpful instructions for putting the script in /usr/local/sbin/ so it is in the execution path:

sudo wget https://raw.githubusercontent.com/major/MySQLTuner-perl/master/mysqltuner.pl -O /usr/local/sbin/mysqltuner.pl
sudo chmod 700 /usr/local/sbin/mysqltuner.pl
sudo mysqltuner.pl

Then restart with sudo service mariadb restart and go about your business with digikam – make sure you rack up some real hours to gather useful data on your performance. Things like ingesting a large collection should generate useful data. I’d suggest doing disk tuning first because that’s hardware dependent, not load dependent.

Disk tuning

Databases tend to hammer storage, and SSDs, especially SLC/enterprise SSDs, massively improve DB performance over spinning disks – unless you have a massive array of really good rotating drives. I’m running this DB on one spinning disk, so performance is very MEH. MySQL and MariaDB make some assumptions about disk performance that are used to scale some pretty important parameters for write caching. You can meaningfully improve on the defaults by testing your disk with a great linux utility called “fio”.

sudo apt install fio
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75
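
The summary lines you care about are the read and write IOPS. Something like this captures the run and pulls them out (exact output format varies a bit between fio versions):

fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test \
    --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw \
    --rwmixread=75 | tee fio-results.txt

grep -E '(read|write): IOPS' fio-results.txt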

This will take a while and will give some very detailed information about the performance of your disk subsystem, the key parameters being average and max write IOPS. I typically create a # performance tuning section at the end of my [mysqld] section, before [embedded], and put these values in as, say (your IOPS values will be different):

# performance tuning

innodb_io_capacity              = 170
innodb_io_capacity_max          = 286

and sudo service mariadb restart

Using mysqltuner.pl

After you’ve collected some data, there may be a list of tuning options.

sudo nano /etc/mysql/mariadb.conf.d/50-server.cnf

Mine currently look like this, but they’ll change as the database stabilizes and my usage patterns change.

# performance tuning

innodb_io_capacity              = 170
innodb_io_capacity_max          = 286

innodb_stats_on_metadata        = 0
innodb_buffer_pool_size         = 4G
innodb_log_file_size            = 512M
innodb_buffer_pool_instances    = 4
skip_name_resolve               = 1
query_cache_size                = 0
query_cache_type                = 0
query_cache_limit               = 2M
max_connections                 = 175
join_buffer_size                = 4M
tmp_table_size                  = 24M
max_heap_table_size             = 24M
max_allowed_packet              = 128M

and

sudo service mariadb restart

Note max_allowed_packet = 128M comes from this guide. I trust it, but it isn’t a mysqltuner suggestion.

Posted at 17:11:21 GMT-0700

Category: HowToLinuxphotoPositiveReviewsTechnology

Tagging MP3 Files with Puddletag on Linux Mint

Tuesday, March 23, 2021 

A “fun” part of organizing an MP3 collection is harmonizing the tags so the data works consistently with whatever management schema you prefer. My preference is management by the file system—genre/artist/year/album/tracks works for me—but consistent metainformation is required and is often disharmonious. Finding metaharmony is a chore I find less taxing with a well-structured tag editor, and to my mind the ur-meta-tag manager is MP3TAG.

The problem is that it only works with that dead-end, spyware-riddled, failing legacyware called “Windows.” Fortunately, in Linux-land we have puddletag, a very solid clone of MP3TAG. The issue is that the version in repositories is (as of this writing) 1.20 and I couldn’t find a PPA for the latest, 2.0.1. But compiling from source is super easy and works in both Linux Mint 19 and Ubuntu 20.04, and version 2.2.0 on 22.04, which contains my mods for latinization of foreign scripts (yay open source!):

  1. Install pre-reqs to build (don’t worry, if they’re installed, they won’t be double installed)
  2. get the tarball of the source code
  3. expand it (into a reasonable directory, like ~/projects)
  4. switch into that directory
  5. run the python executable “puddletag” directly to verify it is working
  6. install it
  7. tell the desktop manager it’s there – and it should be in your window manager along with the rest of your applications.

The latest version as of this post was 2.0.1 from https://github.com/puddletag/puddletag

sudo apt install python3-pyqt5 python3-pyqt5.qtsvg python3-pyparsing python3-mutagen python3-acoustid libchromaprint-dev libchromaprint-tools libchromaprint1 
wget https://github.com/puddletag/puddletag/releases/download/2.0.1/puddletag-2.0.1.tar.gz 
tar -xvf puddletag-2.0.1.tar.gz
cd puddletag-2.0.1/ 
cd puddletag 
./puddletag 
sudo python3 setup.py install 
sudo desktop-file-install puddletag.desktop

A nice feature is that the configuration directory is portable and takes your complete customization with you – it is an extremely customizable program, so you can generally configure it to fit your mental model. Just copy the entire puddletag directory located at ~/.config/puddletag.
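
For example (the destination path here is just a placeholder):

cp -a ~/.config/puddletag /path/to/backup/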

Posted at 15:19:01 GMT-0700

Category: AudioHowToLinuxPositiveReviews

EZ rsync cheat sheet

Wednesday, December 30, 2020 

Rsync is a great tool – incredibly powerful for synchronizing directories, copying over a network or over SSH, an awesome way to back up a mobile device to a core network securely, and other great functions. It works better than just about anything else developed before or since, but it is a command line UI that is easy to forget if you don’t use it for a while, and Windows is a challenge.

This isn’t meant to be a comprehensive guide, there are lots of those, but a quick summary of what I find useful.

There’s one confusing thing that I have to check often to be sure it is going to do what I think it should – the trailing slash on the source. With placeholder paths src and dest, it works like this:
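
rsync -av src dest/     # no trailing slash: the directory itself is copied, yielding dest/src/...
rsync -av src/ dest/    # trailing slash: the contents of src are copied, yielding dest/...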

A quick summary of useful command options (there are many, many) is:

-v, --verbose               increase verbosity
-r, --recursive             recursive (go into subdirectories)
-c, --checksum              skip based on checksum, not mod-time & size (slow, but accurate)
-a, --archive               archive mode; equals -rlptgoD (no -H,-A,-X) (weird with SMB/CIFS)
-z, --compress              compress file data during the transfer, should help over slow links
-n, --dry-run               trial run, don't move anything
-h, --human-readable        display the output numbers in a human-readable format
-u, --update                only copy files that have different sizes and equal or later modification times (-c will enable checksum comparison) 
    --progress              show the sync progress during transfer
    --exclude ".*"          exclude files starting with "."
    --remove-source-files   after synced, empty the dir (like mv/merge)
    --delete                any files in dest that aren't in source are deleted in destination (danger)
    --info=progress2 --info=name0  This yields a pretty usable one line progress meter.

I do not recommend using compression (-z) on a LAN; it’ll probably slow you down. Over a slower (typically) WAN link it usually helps, but YMMV depending on link and CPU speed. Test it with that one-line progress meter if it is a long enough sync to matter – it shows transfer rate a little like this:

1,770,984,121   2%  747.54kB/s   27:46:38  xfr#2159, ir-chk=1028/28648)

If the files really have to be accurately transferred, the checksum (-c) option is critical – every copy (or at least “move”) function should include this validation, especially before deleting the original.
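
Putting the useful options together, a typical careful one-way sync (host and paths are placeholders) might look like:

rsync -avch --info=progress2 --info=name0 /data/photos/ user@host:/backup/photos/

The trailing slash on the source drops the contents of /data/photos directly into /backup/photos.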

Posted at 11:53:28 GMT-0700

Category: FreeBSDLinuxPositiveReviews

Dealing with Apple Branded HEIF .HEIC files on Linux

Saturday, August 22, 2020 

Some of the coding tricks in H.265 have been incorporated into MPEG-H coding, an ISO standard introduced in 2017, which yields a roughly 2:1 coding efficiency gain over the venerable JPEG, which was introduced in 1992. Remember that? I do; I’m old. I remember having a hardware NuBus JPEG decoder card. One of the reasons JPEG has lasted so long is that images have become a small storage burden (compared to 4K video, say) and that changing format standards is extremely annoying to everyone.

Apple has elected to make every rational person’s life difficult, put a little barbed wire around their high-fashion walled garden, and do something a little special with their brand of an HEVC (h.265) profile for images. Now, normally, seeing iOS users’ insta images of how fashionable they are isn’t really worth the effort, but now and then a useful correspondent joins the cult, forks over a ton of money to show off a logo, and starts sending you stuff in their special proprietary format. Annoying, but fixable.

Assuming you’re using an OS that is neither primarily spyware nor fashion forward, such as Linux Mint, you can install HEIF decode (including Apple Brand HEIC) with a few simple commands:

$ sudo add-apt-repository ppa:jakar/qt-heif
$ sudo apt update
$ sudo apt install qt-heif-image-plugin

Once installed, various image viewers should be able to decode the images.  I rather like nomacs as a fairly tolerable replacement for Irfan Skiljan‘s still awesome irfanview.

Update: 2022-09-22

Jammy isn’t supported by the jakar PPA, but there are a few other options:

The easy route, from Hritik Chaudhary in this post:

sudo apt install heif-gdk-pixbuf

should give GDK access, but not (it seems) Qt. You can use gpicview as an image viewer with this library:

sudo apt install gpicview

Or build the qt-heic-image-plugin from source:

git clone --depth 1 https://github.com/novomesk/qt-heic-image-plugin
cd qt-heic-image-plugin
mkdir build
cd build
cmake -DCMAKE_BUILD_TYPE=Release ..
make
sudo make install

This machine required ECM (extra CMake modules), which I hadn’t previously installed:

sudo apt install extra-cmake-modules

for cmake to succeed.

Update: 2023-01-18

A few updates, as is the way these days with built-from-git software,

cd qt-heic-image-plugin
git pull
sudo rm -r build
mkdir build
cd build
cmake -DCMAKE_BUILD_TYPE=Release ..
make
sudo make install

This pulls in the C API version of libheif instead of C++, and an error handling fix.

Posted at 03:56:36 GMT-0700

Category: CodeHowToLinuxphotoPositiveReviewsTechnology

Green Lacewings

Sunday, January 10, 2016 

I noticed that my avocado tree was developing brown spots on the leaves, which were almost certainly the result of Persea mites.

Leaf Symptoms

So I looked up some possible cures, and it seemed like introducing a predator would be the best option and the least hassle.  I’d had good luck with introduced ladybugs a few years back, which formed a stable population that survived for many years after introduction.  For this pest, green lacewings are recommended.  I found a nearby insectary that could provide larvae on cards and they shipped them overnight.

Green Lacewing egg cards

The little guys look cute just waiting to hatch…

Green Lacewing Eggs

I hung the cards on the leaves of the tree after incubating them overnight in a warm room, and they should hatch sometime in the next day or two, as long as the ants don’t find them first…

Card in tree

Update 8 Sept 2016:

The green lacewings seem to have eaten all the mites.  It has been 9 months and there aren’t any signs of damage to this spring’s leaves.  Yay!

No mite bites

The new leaves that grew seem to be developing without any bites at all.  The old leaves that were too damaged have fallen off, but the surviving older leaves still show the scars of the mites.  Green lacewings seem to have done the trick.

Posted at 14:40:47 GMT-0700

Category: photoPositiveReviews

Signal Desktop: Probably a good thing

Tuesday, December 8, 2015 

Signal is an easy to use chat tool that competes (effectively) with WhatsApp or Viber. They’ve just released a desktop version, which is being “preview released/buzz generating released.” It is developed by a guy with some cred in the open source and crypto movements, Moxie Marlinspike. I use it, but do not entirely trust it.

I’m not completely on board with Signal.  It is open source, and so in theory we can verify the code.  But there’s some history I find disquieting.  So while I recommend it as the best, easiest to use, (probably) most secure messaging tool available, I do so with some reservations.

  • It originally handled encrypted SMS messages. There is a long argument about why they broke SMS support on the mailing lists. I find all of the arguments Whisper Systems made specious and unconvincing, and cannot ignore the fact that the SMS tool sent messages through the local carrier (Asiacell, Korek, or Zain here). Breaking that meant secure messages only go through Whisper Systems’ Google-managed servers, where all metadata is captured and accessible to the USG. Since it was open source, that version has been forked and is still developed; I use the SMSSecure fork myself.
  • Signal has captured all the USG funding for messaging systems.  Alternatives are not getting funds.  This may make sense from a purely managerial point of view, but also creates a single point of infiltration.  It is far easier to compromise a single project if there aren’t competing projects.   Part of the strength of Open Source is only achieved when competing development teams are trying to one up each other and expose each other’s flaws (FreeBSD and OpenBSD for example).  In a monoculture, the checks and balances are weaker.
  • Signal has grown more intimate with Google over time.  The desktop version sign up uses your “google ID” to get you in the queue.  Google is the largest commercial spy agency in the world, collecting more data on more people than any other organization except probably the NSA.  They’re currently an advertising company and make their money selling your data to advertisers, something they’re quite disingenuous about, but the data trove they’ve built is regularly mined by organizations with more nefarious aims than merely fleecing you.

What to do? Well, I use Signal. I’m pretty confident the encryption is good, or at least as good as anything else available. I know my metadata is being collected and shared, but until Jake convinces Moxie to use anonymous identifiers for accounts and to message through Tor hidden nodes, you have to be very tech savvy to get around that, and there are no Civil Society grants going to any other messaging services using, for example, an open standard like a Jabber server on a hidden node with OTR.

For now, take a half step up the security ladder and stop using commercial faux security (or unverifiable security, which is the same thing) and give Signal a try.

Maybe at some later date I’ll write up an easy to follow guide on setting up your own jabber server as a tor hidden service and federating it so you can message securely, anonymously, and keep your data (meta and otherwise) on your own hardware in your own house, where it still has at least a little legal protection.

Posted at 10:21:22 GMT-0700

Category: PositivePrivacyReviewsSecurityTechnology

Low Voltage LED Lighting

Monday, July 13, 2015 

My kitchen has had halogen lighting for 20 years, from back when it was a slightly more efficient choice than incandescent lighting and had a pleasing, cooler (bluer, meaning the filament runs hotter) color temperature.

LEDs Installed

Progress has moved on, and while fluorescent lights still have a lead in maximum luminous efficacy (lm/W) – for example, the GE Ecolux Watt-Miser puts out 111 lm/W – they’re less versatile than LEDs and installation is a hassle, while low voltage LEDs are easy to install and look cool.

System Design

The goal of this project was to add dimmable, pleasing light to the kitchen that I found aesthetically interesting. I wanted a decent color rendering index (CRI), ease of installation, and reasonable cost. I’ve always liked the look of cable lighting and the flexibility of the individual, adjustable luminaires.

I couldn’t find much information on how variable output LEDs work and what can be used to drive them.  I have a pretty good collection of high quality power supplies, which I wanted to take advantage of, but wasn’t sure if I’d be able to effectively dim the bulbs from the documentation I found. So I did some tests.

Test Configuration

I bought a few different 12V dimmable LEDs and set up a test configuration to verify operation and output with variable voltage and variable current. The one bit of data I had was that using standard commercial controllers, the lowest output is typically stated to be around 70% of maximum output: that is, the dimming range is pretty limited with standard (PWM/transformer) controllers. The results I found were much more encouraging, but revealed some quirks.

I used a laboratory-grade HP power supply with voltage and current control to drive the LEDs, decent multimeters to measure voltage and current, and an inexpensive luminance meter to measure LED output.

I measured 3 different LEDs I selected based on price and expected compatibility with the aesthetics of the project and because they looked like they’d have different internal drivers and covered a range of rated wattage.

Test Results

These bulbs have internal LED controllers that do some sort of current regulation for the diodes, which results in a weird voltage/current/output response. Each bulb has a different turn-on voltage, then responds fairly predictably to increasing input voltage with increasing output, reaches the controller stabilizing voltage and runs very inefficiently until voltage gets over the rated voltage, and then becomes increasingly efficient until, presumably, at some point the controller burns out. I found that the bulbs all run more efficiently at 14V than at the rated 12V.

As a side note, to perform the data analysis, I used the excellent xongrid plugin for excel to perform Kriging interpolation (AKA Gaussian process regression) to fit the data sets to the graphing function’s capabilities.  The graphs are generated with M-Chart and the table with TablePress.

Watts v. Volts

This chart shows the wattage consumed by each of the three LEDs as a function of input voltage, clearly demonstrating both that the power consumption function is non-linear and that power consumption in watts improves when driven over the rated 12V. Watts are calculated as the product of the measured Volts × Amps. Because of the current inversion that happens as the controllers come fully on-line, these LEDs can’t be properly controlled near full brightness with a current-controlled power supply. It works well to provide continuous and fairly linear dimming at low outputs, but once the voltage/current function changes slope, the current limiting controller in the power supply freaks out.

Lux v. Volts

This chart shows the lux output by each of the three LEDs as a function of input voltage, revealing the effect of the internal LED driver coming on line and regulating output, which complicates controlling brightness but protects the LEDs. The 5W LEDs have a fairly gentle response slope and start at a very low voltage (2V), so they are a good choice for a linear power supply. The 4W LEDs don’t begin to light up until just over 6V, and so are a good match for low-cost switch mode supplies that don’t go to zero.

Lux/W v. Volts

This chart shows the luminous efficiency (Lux/Watt, Lumen measurement is quite complicated) by each of the three LEDs as a function of input voltage, showing that overdriving the LEDs past the rated 12V can significantly improve efficiency.  There’s some risk it will overheat the controller at some point and result in failure.  I’ll update this post if my system starts to fry LEDs, but my guess is that 14V, which cuts the power load by 20% over 12V operation with the 7.5W lamps I selected, will not significantly impact operational lifetime.

Update: This system has been running for 7 years now.  In that time two linear power supplies have failed (they were fairly inexpensive models as such things go).  The LED modules had a high infant mortality rate: 2-3 failed in the first few months, another one failed just about every 6 months for the first couple of years.  I think it has been 4 years since the last one failed.  This implies that longevity is primarily a function of build quality, which varies.

Total System Efficiency

The emitter efficiency is relatively objective, but the total system efficiency includes the power supply.  I used a Daiwa SS-330W switching power supply I happened to have in stock to drive the system, which cost less than a dimmable transformer and matching controller, and should be significantly higher quality.  The Daiwa doesn’t seem to be easily available any more, but something like this would work well for up to 5A total load and something like this would handle as many as 40 7.5W LEDs on a single control, though the minimum 9V output has to be matched to LEDs to get satisfactory dimming. It is important not to oversize the power supply too much as switch mode supplies are only really efficient as you get close to their rated output.  An oversized switchmode power supply can be extremely inefficient.

With the Daiwa, driving 13 7.5W LEDs, I measured 8.46A at 11.94V output, or 101 watts, to brightly illuminate the entire kitchen, providing far more light than the 400W of total halogen lights it replaced. I measured the input into the power supply at 0.940A at 121.3V, or 114 watts. That means the power supply is 88.6% efficient at 12V, which is more or less as expected for a variable output supply.

Increasing the output voltage to 14.63 volts lowered the output current to 5.35A, or 78 watts, without lowering the brightness at the installation; I measured 168 lux at both 12.0V and 14.6V. The input current at 14.63V dropped to 0.755A, or 91.6 watts, meaning the power supply is slightly less efficient at lower output currents (as is usually the case).
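
The efficiency figures are just the ratio of measured output watts to input watts; checking the arithmetic from the numbers above with bc:

echo "scale=4; (8.46*11.94)/(0.940*121.3)" | bc   # .8859 → 88.6% efficient at 12V
echo "scale=4; (5.35*14.63)/(0.755*121.3)" | bc   # .8546 → 85.5% efficient at 14.6V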

  • Overdriving the 12V rated LEDs to 14.63V improves plug efficiency by 20%.

At the low end, the SS-330W’s minimum output is 4.88V, which yields 12 lux at the counter or a 14x dimming ratio to 7% of maximum illumination, a far better range than is reported for standard dimmer/transformer combinations.

Parts

Raw Data:

LED_power_graph_data

(MS Excel file, you will need the xongrid plugin to update the data as rendered in the graphs)

Posted at 02:45:36 GMT-0700

Category: FabricationHowTophotoPositiveReviewsTechnology

Futurama is Awesome

Saturday, January 5, 2013 

I learned two things about Futurama recently which added to my already deep appreciation for the show. The first is that the theme song came from a very cool song by Pierre Henry called Psyche Rock from 1967, which is on youtube. It was remixed by Fatboy Slim in an appealing way.

But what was most interesting recently was to see episode 10 of season 6, The Prisoner of Benda, a spoof of The Prisoner of Zenda, which includes what may be the first TV-episode publication of the proof of a relatively complex mathematical theorem in group theory as a core plot element.

Prisoner_of_Benda_Theorem_on_Chalkboard.png

The problem in the plot is that the Professor’s mind swapping machine creates an immune response which prevents swapping back in one step. So how do you get everyone back into their original bodies? Well, as Sweet Clyde says, it takes at most two extra players [who haven’t swapped yet]. As the entire cast, including the robo-bucket, have swapped bodies, the situation is pretty complex, but fortunately one of the show’s writers, Ken Keeler, has a PhD in applied mathematics from Harvard and found a proof, which is actually shown in the show (above), and then worked into a fast montage that restores everyone.

In the following table, the heading shows the character name of the body; row 0 shows the occupant of that body by the end of the plot’s permutations, before the Globetrotters start the transformations. Rows 1-7 show the steps to restore everyone to their original bodies. Each transformation was animated as a pair using the two “extra players,” except the last rotation, which restores Sweet Clyde and the Bucket.

Posted at 16:34:18 GMT-0700

Category: FilmsPositiveReviewsTechnology

Forbidden Fruit

Wednesday, August 22, 2012 

om nom nom

Forbidden_fruit.jpg
Posted at 15:38:58 GMT-0700

Category: GeopostPlacesPoliticsPositiveReviews