Digikam is an incredibly powerful media management tool that integrates a great collection of powerful media processing projects into a single, fairly nice and moderately intuitive user interface. The problem is that it makes use of SO many projects and libraries that installation is quite fragile, and most distributions are many years out of date – a typical
sudo apt install digikam
will yield version 4.5, while the current release is (as of this writing) 7.2.
In particular, this newer version has face detection that runs LOCALLY – not on Google or Facebook’s servers – meaning you don’t have to trade your personal photos and all the data implicit in them to a data broker to make use of such a useful tool. Sure, Google once bought and then improved Picasa Desktop, which gave you this function, but then they realized this was cutting into their data harvesting business, discontinued Picasa, and tried to convince people to let them look at all their pictures with Google Photos. We really, really need to make personal data a toxic asset, such an intolerable liability that any company that holds any personal data has negative value. But until then, use FOSS software on your own hardware wherever possible.
You can compile the latest version on Ubuntu 20.04 Focal Fossa, though not exactly painlessly, or you can install the flatpak easily. I hate flatpaks with a passion, so I went through the exercise and found what appears to be stable success with the following procedure, which yielded a fully featured digiKam with zero dependency errors or warnings and all features enabled, using MariaDB as a backend.
Install and configure MariaDB
sudo apt update
sudo apt install mariadb-server
sudo mysql_secure_installation
The secure options are all good, accept them unless you know better.
Start the server (if it isn’t already running):
sudo systemctl start mariadb.service
sudo systemctl enable mariadb --now
sudo systemctl status mariadb.service
Do some really basic config:
sudo nano /etc/mysql/mariadb.conf.d/50-server.cnf
character-set-server = utf8mb4
collation-server = utf8mb4_general_ci
default_storage_engine = InnoDB
Switch to mariadb and create an admin user account and (I’d suggest) one for digikam. It seems this has to be done before the first connect and can’t be fixed after. You’ll probably want to use a different user name than I did, but feel free.
sudo mariadb
CREATE USER 'gessel'@'localhost' IDENTIFIED BY 'password';
GRANT ALL ON *.* TO 'gessel'@'localhost' IDENTIFIED BY 'password';
CREATE DATABASE digikam;
GRANT ALL PRIVILEGES ON digikam.* TO 'gessel'@'localhost';
FLUSH PRIVILEGES;
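Before moving on, a quick sanity check of the grants from the same mariadb prompt can’t hurt (assuming the same 'gessel' user as above):
SHOW GRANTS FOR 'gessel'@'localhost';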
This should create the user correctly – though check the instructions tab on the database connection options pane for any changes if you’re following these instructions to install a later version. You will need the socket location to connect to the database, so before connecting run:
mysqladmin -u admin -p version
Should yield something like:
Enter password:
mysqladmin  Ver 9.1 Distrib 10.3.25-MariaDB, for debian-linux-gnu on x86_64
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Server version          10.3.25-MariaDB-0ubuntu0.20.04.1
Protocol version        10
Connection              Localhost via UNIX socket
UNIX socket             /var/run/mysqld/mysqld.sock
Uptime:                 5 hours 26 min 6 sec

Threads: 29  Questions: 6322899  Slow queries: 0  Opens: 108  Flush tables: 1  Open tables: 74  Queries per second avg: 323.157
Note the value for UNIX socket – you’re going to need it later. Here it is /var/run/mysqld/mysqld.sock, though yours might vary.
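Alternatively, you can ask the server for the socket path directly; it is exposed as a server variable:
mysql -u gessel -p -e "SHOW VARIABLES LIKE 'socket';"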
Install digiKam Dependencies
Digikam has just a few dependencies… just a few. The command below should install everything needed for 7.2.0 on Ubuntu 20.04; any other version combination might differ:
sudo aptitude install \
bison \
checkinstall \
devscripts \
doxygen \
extra-cmake-modules \
ffmpeg \
ffmpegthumbnailer \
flex \
graphviz \
help2man \
jasper \
libavcodec-dev \
libavdevice-dev \
libavfilter-dev \
libavformat-dev \
libavutil-dev \
libboost-dev \
libboost-graph-dev \
libeigen3-dev \
libexiv2-dev \
libgphoto2-dev \
libjasper-dev \
libjasper-runtime \
libjasper4 \
libjpeg-dev \
libkf5akonadicontact-dev \
libkf5calendarcore-dev \
libkf5contacts-dev \
libkf5doctools-dev \
libkf5filemetadata-dev \
libkf5kipi-dev \
libkf5notifications-dev \
libkf5notifyconfig-dev \
libkf5sane-dev \
libkf5solid-dev \
libkf5threadweaver-dev \
libkf5xmlgui-dev \
liblcms2-dev \
liblensfun-dev \
liblqr-1-0-dev \
libmagick++-6.q16-dev \
libmagick++-6.q16hdri-dev \
libmagickcore-dev \
libmarble-dev \
libqt5opengl5-dev \
libqt5sql5-mysql \
libqt5svg5-dev \
libqt5webkit5-dev \
libqt5webview5 \
libqt5webview5-dev \
libqt5x11extras5-dev \
libqt5xmlpatterns5-dev \
libqtav-dev \
libqtwebkit-dev \
libswscale-dev \
libtiff-dev \
libusb-1.0-0-dev \
libx264-160 \
libx264-dev \
libx265-192 \
libx265-dev \
libxml2-dev \
libxslt1-dev \
marble \
pkg-kde-tools \
qtbase5-dev \
qtbase5-dev-tools \
qtmultimedia5-dev \
qtwebengine5-dev \
qtwebengine5-dev-tools
Switch to your projects directory (~/projects, say), get the source tarball, cross your fingers, and go to town. The make -j4 step will take a while to compile everything.
wget https://download.kde.org/stable/digikam/7.2.0/digikam-7.2.0.tar.xz
tar -xvf digikam-7.2.0.tar.xz
cd digikam-7.2.0
mkdir build
./bootstrap.linux
cd build
make -j4
su
make install/fast
The ./bootstrap.linux result should be as below – if it indicates something is missing, double-check the dependencies. If you’ve never compiled anything before, you might need to install cmake and some other basics:
-- ----------------------------------------------------------------------------------
-- digiKam 7.2.0 dependencies results <https://www.digikam.org>
--
-- MySQL Database Support will be compiled.. YES (optional)
-- MySQL Internal Support will be compiled.. YES (optional)
-- DBUS Support will be compiled............ YES (optional)
-- App. Style Support will be compiled...... YES (optional)
-- QWebEngine Support will be compiled...... YES (optional)
-- libboostgraph found...................... YES
-- libexiv2 found........................... YES
-- libexpat found........................... YES
-- libjpeg found............................ YES
-- libkde found............................. YES
-- liblcms found............................ YES
-- libopencv found.......................... YES
-- libpng found............................. YES
-- libpthread found......................... YES
-- libqt found.............................. YES
-- libtiff found............................ YES
-- bison found.............................. YES (optional)
-- doxygen found............................ YES (optional)
-- ccache found............................. YES (optional)
-- flex found............................... YES (optional)
-- libakonadicontact found.................. YES (optional)
-- libmagick++ found........................ YES (optional)
-- libeigen3 found.......................... YES (optional)
-- libgphoto2 found......................... YES (optional)
-- libjasper found.......................... YES (optional)
-- libkcalendarcore found................... YES (optional)
-- libkfilemetadata found................... YES (optional)
-- libkiconthemes found..................... YES (optional)
-- libkio found............................. YES (optional)
-- libknotifications found.................. YES (optional)
-- libknotifyconfig found................... YES (optional)
-- libksane found........................... YES (optional)
-- liblensfun found......................... YES (optional)
-- liblqr-1 found........................... YES (optional)
-- libmarble found.......................... YES (optional)
-- libqtav found............................ YES (optional)
-- libthreadweaver found.................... YES (optional)
-- libxml2 found............................ YES (optional)
-- libxslt found............................ YES (optional)
-- libx265 found............................ YES (optional)
-- OpenGL found............................. YES (optional)
-- libqtxmlpatterns found................... YES (optional)
-- digiKam can be compiled.................. YES
-- ----------------------------------------------------------------------------------
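Once make install/fast completes cleanly, a quick sanity check before launching (assuming the install landed in your path; KDE apps generally honor --version, though treat that as an assumption):
digikam --version
It should report 7.2.0.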
Launch and configure Digikam
The configuration options are pretty basic, but note that to configure the digiKam back end you’ll need that MariaDB socket value you noted earlier and the user you created, like so:
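From memory – treat the exact field names as assumptions and check the instructions tab in your version – the MySQL server connection pane wants something like the below, plus the database name you created (digikam here):
Host Name:       localhost
User:            gessel
Password:        password
Connect options: UNIX_SOCKET=/var/run/mysqld/mysqld.sock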
On the first run, it will download about 350 MB of code for the face recognition engine. Hey – maybe a bit heavy, but you’re not giving Google or Apple free lookie-looks at all your personal pictures. Also, if all this is a bit much (and, frankly, it is), I’d consider digiKam one of the few applications that makes the whole flatpak thing seem somewhat justified. Maybe.
Some advice on tuning:
Tuning a database is application and computer specific – there’s no one size fits any, certainly not all – and the right values may change as your database grows. There are far more expert and complete tuning guides available, but here’s what I do:
Pre-Tuning Data Collection
Tuning at the most basic involves instrumenting the database to log problems, running it for a while, then parsing the performance logs for useful hints. The mysqltuner.pl script is far more expert at this than I’ll ever be, so I pretty much just trust it. You have to modify your mysqld.cnf file to enable performance data collection (which, BTW, slows down operation, so undo this later), which for MariaDB means adding a few lines:
sudo nano /etc/mysql/mariadb.conf.d/50-server.cnf

# enable performance schema to allow optimization;
# ironically this hits performance, so disable after tuning.
# in the [mysqld] section insert:
performance_schema=ON
performance-schema-instrument='stage/%=ON'
performance-schema-consumer-events-stages-current=ON
performance-schema-consumer-events-stages-history=ON
performance-schema-consumer-events-stages-history-long=ON
Follow the instructions for installing mysqltuner.pl at https://github.com/major/MySQLTuner-perl#downloadinstallation
I rather like this guide’s helpful instructions for putting the script in /usr/local/sbin/ so it is in the execution path:
sudo wget https://raw.githubusercontent.com/major/MySQLTuner-perl/master/mysqltuner.pl -O /usr/local/sbin/mysqltuner.pl
sudo chmod 700 /usr/local/sbin/mysqltuner.pl
sudo mysqltuner.pl
Then restart with
sudo service mariadb restart
and go about your business with digikam – make sure you rack up some real hours to gather useful data on your performance; things like ingesting a large collection should generate useful data. I’d suggest doing disk tuning first because that’s hardware dependent, not load dependent.
Databases tend to hammer storage, and SSDs – especially SLC/enterprise SSDs – massively improve DB performance over spinning disks, unless you have a massive array of really good ones. I’m running this DB on one spinning disk, so performance is very MEH. MySQL and MariaDB make some assumptions about disk performance which are used to scale some pretty important parameters for write caching. You can meaningfully improve on the defaults by testing your disk with a great Linux utility called “fio”.
sudo apt install fio
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75
This will take a while and will give some very detailed information about the performance of your disk subsystem, the key parameters being average and max write IOPS. I typically create a # performance tuning section at the end of my [mysqld] section, before [embedded], and put these values in as, say (your IOPS values will be different):
# performance tuning
innodb_io_capacity = 170
innodb_io_capacity_max = 286
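Those two values come straight from the fio run – I set innodb_io_capacity near the sustained write IOPS and innodb_io_capacity_max near the peak. If you tack | tee fio-results.txt onto the fio command above, you can pull the relevant lines back out later (hedged: the label is IOPS= in fio 3.x, iops= in older builds):
grep -i "iops" fio-results.txt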
sudo service mariadb restart
After you’ve collected some data, mysqltuner.pl will likely produce a list of suggested tuning options, which go into the same config file:
sudo nano /etc/mysql/mariadb.conf.d/50-server.cnf
Mine currently look like this, but they’ll change as the database stabilizes and my usage patterns change.
# performance tuning
innodb_io_capacity = 170
innodb_io_capacity_max = 286
innodb_stats_on_metadata = 0
innodb_buffer_pool_size = 4G
innodb_log_file_size = 512M
innodb_buffer_pool_instances = 4
skip_name_resolve = 1
query_cache_size = 0
query_cache_type = 0
query_cache_limit = 2M
max_connections = 175
join_buffer_size = 4M
tmp_table_size = 24M
max_heap_table_size = 24M
max_allowed_packet = 128M
sudo service mariadb restart
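After the restart, it’s worth confirming the server actually picked up the new values, e.g.:
sudo mysql -e "SHOW VARIABLES LIKE 'innodb_io_capacity%';"
sudo mysql -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size';"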
max_allowed_packet = 128M comes from this guide. I trust it, but it isn’t a mysqltuner suggestion.
A “fun” part of organizing an MP3 collection is harmonizing the tags so the data works consistently with whatever management schema you prefer. My preference is management by the file system – genre/artist/year/album/tracks works for me – but consistent meta-information is required and is often disharmonious. Finding meta-harmony is a chore I find less taxing with a well-structured tag editor, and to my mind the ur-meta-tag manager is MP3TAG.
The problem is that it only works with that dead-end, spyware-riddled, failing legacyware called “Windows.” Fortunately, in Linux-land we have puddletag, a very solid clone of MP3TAG. The issue is that the version in the repositories is (as of this writing) 1.20 and I couldn’t find a PPA for the latest, 2.0.1. But compiling from source is super easy and works in both Linux Mint 19 and Ubuntu 20.04 (yay open source!):
- Install pre-reqs to build (don’t worry, if they’re installed, they won’t be double installed)
- get the tarball of the source code
- expand it (into a reasonable directory, like ~/projects)
- switch into that directory
- run the python executable “puddletag” directly to verify it is working
- install it
- tell the desktop manager it’s there – and it should be in your window manager along with the rest of your applications.
The latest version as of this post was 2.0.1 from https://github.com/puddletag/puddletag
sudo apt install python3-pyqt5 python3-pyqt5.qtsvg python3-pyparsing python3-mutagen python3-acoustid libchromaprint-dev libchromaprint-tools libchromaprint1
wget https://github.com/puddletag/puddletag/releases/download/2.0.1/puddletag-2.0.1.tar.gz
tar -xvf puddletag-2.0.1.tar.gz
cd puddletag-2.0.1/
cd puddletag
./puddletag
sudo python3 setup.py install
sudo desktop-file-install puddletag.desktop
A nice feature is that the configuration directory is portable and takes your complete customization with you – it is an extremely customizable program, so you can generally configure it to fit your mental model. Just copy the entire puddletag directory located at
Using WebP-coded images inside SVG containers works. I haven’t found any automatic way to do it, but it is easy enough manually, and it results in very efficiently coded images that work well on the internets. The manual process is to Base64 encode the WebP image, then open the .svg file in a text editor and replace the data in the appropriate xlink:href="…" with the Base64-encoded WebP as a data URI (“…” means the appropriate data, obviously).
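The end result inside the .svg looks something like this (the dimensions are illustrative and the “…” stands in for the Base64 payload):
<image width="640" height="480"
       xlink:href="data:image/webp;base64,…" />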
Back in about 2010 Google released the spec for WebP, an image compression format that provides a roughly 2-4x coding efficiency improvement over the venerable JPEG (built on DCT techniques dating to 1974), derived from the VP8 CODEC they bought from On2. VP8 is a contemporary of, and technical equivalent to, H.264 and was developed during a rush of innovation to replace the aging MPEG-II standard that also produced Theora and Dirac. Both the VP8 and H.264 CODECs are encumbered by patents, but Google granted an irrevocable license to all of VP8’s patents, making it “open,” while H.264’s patents compel licensing from MPEG-LA. One would think this would tend to make VP8 (and the WebM container) a global standard, but Apple refused to give Google the win and there’s still no native support in Apple products.
A small aside on video and still coding techniques.
All modern “lossy” CODECs (throwing some data away, like .mp3, as opposed to “lossless,” meaning the original can be reconstructed exactly, as in .flac) are founded on either Discrete Cosine Transform (DCT) or wavelet (DWT) encoding of “blocks” of image data. There are far more detailed write-ups online that explain the process in detail, but the basic concept is to divide an image into small tiles of data, then apply a mathematical function that converts that data into a form which sorts the information from least human-perceptible to most human-perceptible, and set some threshold for throwing away the least important data while leaving the bits that are most important to human perception.
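For the mathematically curious, the workhorse here is the 2-D type-II DCT. For an N×N tile of pixels p (N = 8 in classic JPEG; VP8/WebP uses 4×4 integer approximations), it produces coefficients X sorted roughly from broad strokes (low frequency) to fine detail (high frequency):

X_{u,v} = \alpha(u)\,\alpha(v) \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} p_{x,y} \cos\!\left[\frac{(2x+1)u\pi}{2N}\right] \cos\!\left[\frac{(2y+1)v\pi}{2N}\right], \qquad \alpha(0)=\sqrt{\tfrac{1}{N}},\ \ \alpha(u>0)=\sqrt{\tfrac{2}{N}}

Quantization then coarsens or zeroes the high-frequency coefficients – that’s the “throwing away” step.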
Wavelets are promising, but never really took off, as in JPEG2000 and Dirac (which was developed by the BBC). It is a fairly safe bet that any video or still image you see is DCT coded thanks to Nasir Ahmed, T. Natarajan and K. R. Rao. The differences between 1993’s MPEG-1 and 2013’s H.265 are largely around how the data that is perceptually important is encoded in each still (intra-frame coding) and some very important innovations in inter-frame coding that aren’t relevant to still images.
It is the application of these clever intra-frame perceptual data compression techniques that is most relevant to the coding efficiency difference between JPEG and WebP.
Back to the good stuff…
Way back in 2010 Google experimented with the VP8 intra-coding techniques to develop WebP, a still image CODEC that had to have two core features:
- better coding efficiency than JPEG,
- ability to handle transparency like .png or .tiff.
This could be the one standard image coding technique to rule them all – from icons to gigapixel images, all the necessary features and many times better coding efficiency than the rest. Who wouldn’t love that?
Of course it was Apple. Can’t let Google have the win. But, finally, with Safari 14 (June 22, 2020 – a decade late!) iOS users can finally see WebP images and websites don’t need crazy auto-detect 1974 tech substitution tricks. Good job Apple!
It may not be a coincidence that Apple has just released their own still image format, .heif, based on the intra-frame coding magic of H.265 – maybe they thought it might be a good idea to suddenly pretend to be an open player rather than a walled-garden-screw-you, lest iOS insta-users wall themselves off from the 90% of the world that isn’t willing to pay double to pose with a fashionable icon in their hands. Not surprisingly, .heic, based on H.265-era developments, is meaningfully more efficient than WebP, based on VP8/H.264-era techniques; but as it took WebP 10 years to become a usable web standard, I wouldn’t count on .heic having universal support soon.
Oh well. In the meantime, VP8 gave way to VP9 and then VP10, which has now become AV1, arguably a generation ahead of HEVC/H.265. There’s no hardware decode yet (as of the end of 2020), but all the big players are behind it, so I expect 2021 devices will support it and GPU decode will come in 2021. By then, expect VVC (H.266) to be replacing HEVC (H.265) with a ~35% coding-efficiency improvement.
Along with AV1’s intra/inter-frame coding advances, the intra-frame techniques are useful for a still format called AVIF: basically, AVIF is to AV1 (“VP11”) what WebP is to VP8 and HEIF is to HEVC. So far (Dec 2020) only Chrome and Opera support AVIF images.
Then, of course, there’s JPEG XL on the way. For now, the most broadly supported post-JPEG image codec is WEBP.
SVG support in browsers is a much older thing – Apple embraced it early (SVG was not developed by Google, so…) and basically everything but IE has full support (IE… the tool you use to download a real browser). So if we have SVG and WebP, why not both together?
Oddly I can’t find support for this in any of the tools I use, but as noted at the open, it is pretty easy. The workflow I use is to:
- generate a graphic in GIMP or Photoshop or whatever and save as .png or .jpg as appropriate to the image content with little compression (high image quality)
- Combine that with graphics in Inkscape.
- If the graphics include type, convert the type to SVG paths to avoid font availability problems or having to download a font file before rendering the text or having it render randomly.
- Save the file (as .svg, the native format of Inkscape)
- Convert the image file to WebP with a reasonable tool like Nomacs or IrfanView (a scripted version of these last few steps is sketched after this list).
- Base64 encode the image file, either on the command line with base64 infile.webp > outfile.webp.b64 or with this convenient site.
- If you use the command-line option, the prefix to the data is data:image/webp;base64, (standard data-URI syntax for a WebP payload).
- Replace the … in the appropriate xlink:href="…" with the new data, using a text editor like Atom.
- Drop the file on a browser page to see if it works.
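If you do this often, the convert/encode/replace steps can be scripted. A minimal sketch, assuming cwebp from the webp package and a single xlink:href in the target file – all file names are placeholders:

# convert, encode, and splice into the SVG (quick and dirty – no error handling)
cwebp -q 80 figure.png -o figure.webp
B64=$(base64 -w0 figure.webp)
sed -i "s|xlink:href=\"[^\"]*\"|xlink:href=\"data:image/webp;base64,${B64}\"|" figure.svg

sed will balk if the payload gets huge or the file has several xlink:href attributes, so for anything fancy a real XML tool is safer.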
The picture is 101.9kB and tolerates zoom quite well. (give it a try, click and zoom on the image).
Some of the coding tricks in H.265 have been incorporated into MPEG-H coding, an ISO standard introduced in 2017, which yields a roughly 2:1 coding-efficiency gain over the venerable JPEG, introduced in 1992. Remember that? I do; I’m old. I remember having a hardware NuBus JPEG decoder card. One of the reasons JPEG has lasted so long is that images have become a small storage burden (compared to 4K video, say) and that changing format standards is extremely annoying to everyone.
Apple has elected to make every rational person’s life difficult, put a little barbed wire around their high-fashion walled garden, and do something a little special with their brand of HEVC (H.265) image profile. Now, normally, seeing iOS users’ insta images of how fashionable they are isn’t really worth the effort, but now and then a useful correspondent joins the cult, forks over a ton of money to show off a logo, and starts sending you stuff in their special proprietary format. Annoying, but fixable.
$ sudo add-apt-repository ppa:jakar/qt-heif
$ sudo apt update
$ sudo apt install qt-heif-image-plugin
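If you just need one-off conversions rather than native viewing, libheif’s sample tools work too (package name as of Ubuntu 20.04, if memory serves; file names are placeholders):
$ sudo apt install libheif-examples
$ heif-convert IMG_1234.heic IMG_1234.jpg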
When I was a young child, my dad bought a brand-new 1976 GMC Suburban. Yellow. No extras at all – no headliner, plastic seats, manual everything, 305 V8.
It became my car in high school, survived that. Came out to California with me; ended up in the service of SRL, survived that too.
Eventually, it escaped.
From my distant location overseas, listening to the news via podcasts is a great way to keep up: something I’m quite grateful to have access to on demand and via the internets. Until the end of Net Neutrality means that only “Verizon Insights” and “Life at AT&T” are still accessible, I enjoy a range of news sources on a daily basis using a podcatcher called BeyondPod. One of the essential features it has is the ability to speed up the tempo of podcasts, some of which are a bit slow as recorded. But one… one is like dripping molasses on a winter day: The Daily from the NYT, with Michael Barbaro. I’m pretty sure silences are inserted in editing to draw out the drama to infuriating lengths, and the tempo of the audio is selectively slowed to about half normal speed. Nobody can actually talk that slowly. I mean, listen and try – like actually try to draw out a word that might take one second to pronounce to two full seconds. It is a pretty good news summary and has some useful information, but there’s no way I’d suffer through it without setting the tempo to 2x.
Every time I accidentally stream the podcast rather than downloading and playing it, the tempo control is disabled, and while I scramble to skip to the next podcast before I start questioning reality, I often wonder for a moment just how bad the pauses are. Here’s my analysis:
I consider the BBC Global News to be a very professional, truly “broadcast quality” podcast. The announcers are clear, comprehensible, and speak at a pace that is appropriate for a news broadcast. I still speed it up because daily life is like that now, but if I listen at normal pace, it isn’t even slightly annoying.
The Economist Radio is fairly typical of a print publication’s efforts at presenting print journalists as audio stars. It doesn’t always work out perfectly, and the pacing varies a lot by who is speaking and the rather eclectic line-up. In general it is annoyingly slow, but not interminably so. It comes across as a bit amateur by broadcast standards, but well done and very informative.
But then there’s The Daily from the NYT. This podcast was the reason I took the time to figure out how to speed up playback. There was no other choice: either unsubscribe or speed it up to something not aneurysm inducing. Looking at the waveforms, I suspect they might actually be inserting silences of around 500 ms between words, perhaps for dramatic effect. (There’s way too much dramatic effect in a lot of the stories, which speeding up only hastens rather than fully alleviates – never have you heard so many interviewees break into uncomfortable tears as they’re overwhelmed by whatever the day’s tragedy is, an artifice only slightly less annoying than broadcasting the sound of someone eating. OMG, that’s real. Rule 34.)
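If you want to put numbers on the dead air yourself, ffmpeg’s silencedetect filter logs every gap longer than a threshold. A sketch – the noise floor and minimum duration are guesses to tune, and the file name is a placeholder:

ffmpeg -i the-daily-episode.mp3 -af silencedetect=noise=-35dB:d=0.4 -f null - 2>&1 | grep silence_duration

Each matching line reports where a silence ended and how long it lasted, which makes those suspiciously uniform half-second gaps easy to spot.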
Many years ago (21 years, 9 months as of this post), I used some as-of-then only slightly out of date equipment to record a one week time lapse of the cats’ litter box.
I found the video on a CD-ROM (remember those?) and thought I’d see if it was still usable. It wasn’t – QuickTime had abandoned support for most of the 1990s-era codecs, and as they were pre-internet, there just isn’t any support any more. I had to fire up my old Mac 9500, which booted just fine after years of sitting, even if most of the rubber feet on the peripherals had long since turned to goo. The OS 9 version of QuickTime let me resave it as uncompressed, which of course was way too big for the massive dual 9 GB drives in that machine. YouTube would eat the uncompressed format, and so this critical archival record is preserved for a little longer.
I noticed that my avocado tree was developing brown spots on the leaves, which were almost certainly the result of Persea mites.
So I looked up some possible cures, and it seemed like introducing a predator would be the best option and the least hassle. I’d had good luck with introduced ladybugs a few years back, which formed a stable population that survived for many years after introduction. For this pest, green lacewings are recommended. I found a nearby insectary that could provide larvae on cards and they shipped them overnight.
The little guys look cute just waiting to hatch…
I hung the cards on the leaves of the tree after incubating them overnight in a warm room; they should hatch sometime in the next day or two, as long as the ants don’t find them first…
Update 8 Sept 2016:
The green lacewings seem to have eaten all the mites. It has been 9 months and there aren’t any signs of damage to this spring’s leaves. Yay!
The new leaves that grew seem to be developing without any bites at all. The old leaves that were too damaged have fallen off, but the surviving older leaves still show the scars of the mites. Green lacewings seem to have done the trick.
@United has new coach trays that are coated with a material that has an amazing coefficient of friction. They are not sticky at all—there’s no adhesion effect—it is all friction. Even low surface energy plastics don’t slide on it at all.
The approximately 75-80° angle in the picture is the point at which the cup topples over itself. It isn’t adhered to the surface and it doesn’t appear to slide at all before toppling.
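A back-of-the-napkin check on just how grippy that is: treating the cup as a rigid block on an inclined tray, it slides when \tan\theta > \mu (the static coefficient of friction) and topples when \tan\theta > \frac{w/2}{h} (half the base width over the center-of-gravity height). Toppling winning at roughly 75–80° with no sliding implies

\mu \ \ge\ \tan 75^\circ \approx 3.7

and possibly as high as \tan 80^\circ \approx 5.7. For reference, dry rubber on most surfaces manages a coefficient of friction of about 1 to 2, so that coating really is remarkable.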
This would be the perfect coating for a smart phone pad in a car.