WordPress

WordPress forward and back navigation I find pleasing

Sunday, May 7, 2023 

A new addition to this site is forward-back navigation at the bottom of posts. I'm not sure why I went 15 years or so without thinking about a reader who might want to browse the scintillating and inspiring topics I cover on a semi-regular basis, in detail and with such stunning insight; it just never occurred to me until I was messing around with creating topically relevant featured images using Stable Diffusion. But in time the thought came, and I'm happy with the result.

This guide from wpbeginner was my starting point.  I used their wpb_posts_nav function as a basis, but removed the SVG arrows and navigation aid text to simplify the presentation.  It ends up looking like the code below, which I simply added to functions.php of my custom child theme using the theme editor, just before the final ; and the closing ?> on the last line.  This version is 100% derivative of the wpbeginner code with some minor deletions to my taste, and with the left/right positions of previous/next reversed, as I tend to think of time as flowing left-to-right.

function wpb_posts_nav(){
    $next_post = get_next_post();
    $prev_post = get_previous_post();
      
    if ( $next_post || $prev_post ) : ?>
      
        <div class="wpb-posts-nav">
            <div>
                <?php if ( ! empty( $next_post ) ) : ?>
                    <a href="<?php echo get_permalink( $next_post ); ?>">
                        <div>
                            <div class="wpb-posts-nav__thumbnail wpb-posts-nav__next">
                                <?php echo get_the_post_thumbnail( $next_post, [ 100, 100 ] ); ?>
                            </div>
                        </div>
                        <div>
                            <h4><?php echo get_the_title( $next_post ); ?></h4>
                        </div>
                    </a>
                <?php endif; ?>
            </div>
            <div>
                <?php if ( ! empty( $prev_post ) ) : ?>
                    <a href="<?php echo get_permalink( $prev_post ); ?>">
                        <div>
                            <h4><?php echo get_the_title( $prev_post ); ?></h4>
                        </div>
                        <div>
                            <div class="wpb-posts-nav__thumbnail wpb-posts-nav__prev">
                                <?php echo get_the_post_thumbnail( $prev_post, [ 100, 100 ] ); ?>
                            </div>
                        </div>
                    </a>
                <?php endif; ?>
            </div>
        </div> <!-- .wpb-posts-nav -->
      
    <?php endif;
}

There's a bit of CSS that adds some styling.  I made a few small changes to suit my preferences, removed the bits no longer needed, and appended this to the very end of the style.css that comes with my template.

/* Post Navigation Code */

.wpb-posts-nav {
    display: grid;
    grid-template-columns: 1fr 1fr;
    grid-gap: 50px;
    align-items: center;
    max-width: 1200px;
    margin: 25px auto;
}
  
.wpb-posts-nav a {
    display: grid;
    grid-gap: 20px;
    align-items: center;
}
  
.wpb-posts-nav > div:nth-child(1) a {
    grid-template-columns: 100px 1fr;
    text-align: left;
}
  
.wpb-posts-nav > div:nth-child(2) a {
    grid-template-columns: 1fr 100px;
    text-align: right;
}
  
.wpb-posts-nav__thumbnail {
    display: block;
    margin: 0;
}
  
.wpb-posts-nav__thumbnail img {
    border-radius: 0px;
}

The last step is to add the nav function call to your single.php template.  In my case it works well just after the endwhile; the call is just <?php wpb_posts_nav(); ?> and for my theme it is placed like this:

...
		<?php endwhile; // end of the loop. ?>
		<?php wpb_posts_nav(); ?>
		  </div>
...

I’m happy with the results.

https://www.wpbeginner.com/wp-tutorials/how-to-add-next-previous-links-in-wordpress-ultimate-guide/

Posted at 06:51:51 GMT-0700

Category: Code, Self-publishing, Technology

Sidebar featured images only on single post pages

Tuesday, January 24, 2023 

After updating to WordPress 6.x, updating my theme (Clean Black based), and then merging the customizations back in with meld (yes, I really should do a child theme, but this is a pretty simple theme so meld is fine), I didn't really like the way the post thumbnails are shown, preferring to keep them to the right.  I mean, Clean Black was last updated in 2014, and while it still works fine, that was a while ago.  Plus I had hand-coded a theme sometime in the noughties and wanted to more or less keep it while taking advantage of some of the responsive features introduced since then.

Pretty much any question one might have, someone has asked before, and I found some reasonable solutions, some more complex than others.  There's a reasonable three-modification solution that works by creating another sidebar.php file (different name, same function) containing the modification you want, which gets called by single.php (and not the main page), but that seemed unnecessarily complicated.  I settled on a conditional test, is_singular, which limits the get_the_post_thumbnail call to where I wanted it and doesn't invoke it elsewhere.  A few of the other options on the same Stack Exchange thread didn't work for me; your install may be different.  What I settled on (including a map call for geo-tagged posts) is:

<div id="sidebar">
	
      <?php if (is_singular('post') ) {
           echo get_the_post_thumbnail( $post->ID, 'thumbnail');
           echo GeoMashup::map('height=150&width=300&zoom=5&add_overview_control=false&add_map_type_control=false&add_map_control=false');
           } ?>

	<div class="widgetarea">
	
	<ul id="sidebarwidgeted">

<?php if (!dynamic_sidebar('Sidebar Top') ) : ?>
		
	<?php endif; ?>

	</ul>
	
	</div>
	
</div>

And I get what I was looking for: a graphical anchor at the top of single posts (but not pages) for the less purely lexically inclined, one that doesn't clutter the home page or other renderings, all with a wee bit o' PHP.

Posted at 17:10:36 GMT-0700

Category: HowTo, Linux, Self-publishing

Some gnuplot and datamash adventures

Thursday, December 29, 2022 

I've been collecting data on the state of the Ukrainian digital network on a daily basis since about the start of the war; some details of the process are in this post.  I was creating and updating maps made with QGIS when particularly notable things happened, generally correlated with significant damage to the Ukrainian power infrastructure (and/or data infrastructure).  I wanted a way to provide a live update of the feed, and as all such projects go, the real reward was the friends made along the way to an automatically updated "live" summary stats table and graph.

My data collection tools generate some rather large CSV files for the mapping tools, but to keep a running summary I also extract the daily total of responding servers, compute the day-over-day change, and append those values to a running tally CSV file.  A couple of really great free software tools help turn this simple data structure into a nicely formatted (I think) table and graph: datamash and gnuplot. I'm not remotely expert enough to get into the full details of these excellent tools, but I put together some tricks that are working for me and might help someone else trying to do something similar.
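
The tally step itself is only a few lines of shell.  A minimal sketch, with made-up file names and a stand-in for the real count (my actual collection script differs):

#!/bin/bash
# Hypothetical sketch: append today's responding-server total and the
# day-over-day change to a running tally CSV of the form date,total,delta.
TALLY=/trend.csv
TOTAL=$(wc -l < "/scans/$(date +%F).csv")     # stand-in for the real daily count
PREV=$(tail -n 1 "$TALLY" | cut -d, -f2)      # yesterday's total from the tally
DELTA=$(( TOTAL - PREV ))
echo "$(date +%F),$TOTAL,$DELTA" >> "$TALLY"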

Using datamash for Statistical Summaries

Datamash is a great command line tool for getting statistics from text files like logs or CSV files or other relatively accessible and easily managed data sources.  It is quite a bit easier to use and less resource-intensive than R or GNU Octave, but obviously also much more limited. I really only wanted very basic statistics, and I wanted to be able to get to them from Bash with a cron job calling a simple script; for that sort of work, datamash is the tool of choice.

Basic statistics are easy to compute with datamash; but if you want the median of a data set formatted with comma thousands grouping, so it reads like 120,915 (say), you might need a slightly more complicated (but still one-liner) command like this:

Median="$(/usr/bin/datamash -t, median 2  < /trend.csv | datamash round 1 | sed ':a;s/\B[0-9]\{3\}\>/,&/;ta')"

Median=               Assign the result to the variable $Median
-t,                   Comma delimited (instead of tab, default)
median                one of a bazillion stats datamash can compute
2                     use column two of the CSV data set.
< /trend.csv          feed the previous command a CSV file nom nom
| datamash round 1    pipe the result back to datamash to round the decimals away
| sed (yadda yadda)   pipe that result to sed to insert comma thousands separator*

*HT @sim

Once I had these values properly formatted as readable strings, I needed a way to automatically insert those updates into a consistently formatted table like this:

Sample statistics chart

I first create a dummy table with a plugin called TablePress, using target placeholder values (like +++Median), which I then extract as HTML and save as a template for later modification. With the help of a little external file inclusion code in WordPress, you can pull that formatted, but now static, HTML back into the post from a server-side file.  Then all you need to do is modify the HTML file version of the table with sed via a cron job, replacing the dummy values with the datamash-computed values, and scp the updated table code to the server so it is rendered into the viewed page:

sed -i -e "s/+++Median/$Median/g" "stats_table.html"
/usr/bin/sshpass -P assphrase -f '~/.pass' /usr/bin/scp -r stats_table.html user@site.org:/usr/local/www/wp-content/uploads/stats_table.html

For this specific application, the bash script runs daily via cron with the appropriate datamash lines and table variable replacements to keep the table current.  It first copies the table template into a working directory, computes the latest values with datamash, then seds those updated values into the working copy of the template, and scps that over the old version in the wp-content directory for visitor viewing pleasure.
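
Put together, the daily job looks roughly like the sketch below.  Paths and the host are placeholders, the datamash and sed lines are repeated for each statistic, and the scp line is the same one shown above:

#!/bin/bash
# Rough sketch of the daily table-update cron job (placeholder paths and host).
cd /home/user/work || exit 1
cp /home/user/templates/stats_table.html .    # start from a fresh copy of the template

# One pair of these per statistic in the table:
Median="$(/usr/bin/datamash -t, median 2 < /trend.csv | datamash round 1 | sed ':a;s/\B[0-9]\{3\}\>/,&/;ta')"
sed -i -e "s/+++Median/$Median/g" stats_table.html

# Push the finished table to the web server:
/usr/bin/sshpass -P assphrase -f '~/.pass' /usr/bin/scp stats_table.html user@site.org:/usr/local/www/wp-content/uploads/stats_table.html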

Using gnuplot for Generating a Live Graph

The basic process of providing live data to the server is about the same.  A different WordPress plugin, SVG Support, adds support for SVG filetypes within WordPress.  I suspect this is not the default since SVG can contain active code, but a modern website without SVG support is like a fish without a bicycle, isn't it? SVG is useful in this case in another way: the summary page integrates a scaled image which is linked to the full-size SVG file.  For bitmapped files, the scaled image (or thumbnail) is generated by downsampling the original (with ImageMagick, optimally, not GD), and that needs an active request (i.e. PHP code) to update.  In this case there's no need, since the SVG thumbnail is just the original file resized. SVG: Scalable Vector Graphics FTW.

Gnuplot is an impressively full-featured graphing tool with a complex command structure.  I had to piece together some details from various sources and then do some sedding to get the final touches as I wanted them.  As every plot is different, I'll just document the bits I pieced together myself: the plotting details go in the gnuplot command script, the other bits in a bash script executed afterwards to add some non-standard formatting to the gnuplot SVG output.
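
For orientation, the overall shape of the job is roughly the sketch below: a gnuplot script renders the tally CSV to SVG, and the sed touch-ups covered in the following sections then edit that SVG.  This is an assumption-laden outline (the column layout date,total,delta, the sizes, and the styling are guesses), not my actual script:

#!/bin/bash
# Sketch: render trend.csv to SVG with gnuplot, then post-process with sed.
gnuplot <<'EOF'
set terminal svg size 1200,600 dynamic
set output '/UKR-server-trend.svg'
set datafile separator ","
set xdata time
set timefmt "%Y-%m-%d"          # assumes ISO dates in column 1
set format x "%b %Y"
set y2tics                      # right-hand axis for the daily delta bars
plot '/trend.csv' using 1:2 with lines title 'Servers responding', \
     '/trend.csv' using 1:3 with boxes axes x1y2 title 'Daily change', \
     "< tail -n 1 '/trend.csv'" using 1:2:2 with labels notitle
EOF
# ...sed touch-ups to the SVG go here (see the sections below).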

Title of the plot

The SVG <title> block is set as “Gnuplot” and I don’t see any way to change that from the command line, so I replaced it with the title I wanted, using a variable for the most recently updated data point extracted by datamash as above:

sed -i -e "s/<title>Gnuplot<\/title>/<title>Ukrainian Servers Responding on port 80 from 2022-03-05 to $LDate<\/title>/g" "/UKR-server-trend.svg" sed -i -e "s/<desc>Produced by GNUPLOT 5.2 patchlevel 2 <\/desc>/<desc>Daily automated update of Ukrainian server response statistics.<\/desc>/g" "/UKR-server-trend.svg"

This title value is used as the browser tab title.  I'm not sure where the <desc> will show up, but it is likely read by various spiders and serves as an accessibility aid for online readers.

Last Data Point

I wanted the most recent server count to be visible at the end of the plot.  This takes two steps: first plot that data point alone with a label (but no title, so it doesn't show up in the data key/legend) by adding a separate plot of just that last datum, like this:

"< tail -n 1 '/trend.csv'" u 1:2:2 w labels notitle

This works fine, but if you hover over the data point, it just pops up “gnuplot_plot_4”, and I'd rather have more useful data, so I sed that out and replace it with some values I got from datamash queries earlier in the script, like so:

sed -i -e "s/<title>gnuplot_plot_4<\/title>/<title>Tot: $LTot; Diff: $LDif<\/title>/g" "/UKR-server-trend.svg"

Adding Link Text

SVG supports clickable links, but you can’t (I don’t think) define those URLs in the label command.  So first set the visible text with a simple gnuplot label command:

set label "Black Rose Technology https://brt.llc" at graph 0.07,0.03 center tc rgb "#693738" font "copperplate,12"

and then enhance the resulting SVG code with a link using good old sed:

sed -i -e "s#<text><tspan font-family=\"copperplate\" >Black Rose Technology https://brt.llc</tspan></text>#<a xlink:href=\"https://brt.llc/\" target=\"__blank\"><text><tspan font-family=\"copperplate\" >Black Rose Technology https://brt.llc</tspan></text></a>#g" "/UKR-server-trend.svg"
Hovertext for the Delta Bars

Adding hovertext to the ends of the daily delta bars was a bit more involved.  The SVG <title> element is interpreted by most browsers as a hoverable element, but adding visible data labels to the ends of the bars makes the graph icky noisy.  Fortunately SVG supports transparent text. To get all this to work, I replot the entire bar graph data series as just labels, like so:

'/trend.csv' using 1:3:3 with labels font "arial,4" notitle axes x1y2

But this leaves a very noisy-looking graph, so we pull out our trusty sed to set the labels' opacity to "0" so they're hidden:

sed -i -e "s/\(stroke=\"none\" fill=\"black\"\)\( font-family=\"arial\" font-size=\"4.00\"\)/\1 opacity=\"0\"\2/g" "/UKR-server-trend.svg"

and then find the data value and generate a <title> element of that data value using back-references.  I must admit, I have not memorized regular expressions to the point where I can just write these and have them work on the first try: a GNU sed tester is very helpful.

sed -i -e "s/\(<text><tspan font-family=\"arial\" >\)\([-1234567890]*\)<\/tspan><\/text>/\1\2<title>\2<\/title><\/tspan><\/text>/g" "/UKR-server-trend.svg"

And you get hovertext data interrogation.  W00t!

Sample of gnuplot showing hovertext

Note that cron jobs are executed with different environment variables than user-executed scripts, which can result in variations in date formatting (which can be set explicitly in gnuplot) and in the thousands separator and decimal characters (,/.). To get consistent results from a cron job, explicitly set the appropriate locale, either in the script like

#!/bin/bash
LC_NUMERIC=en_US.UTF-8
...

or for all cron jobs as in crontab -e

LC_NUMERIC=en_US.UTF-8
MAILTO=user@domain.com
# .---------------- minute (0 - 59) 
# |    .------------- hour (0 - 23)
# |    |      .---------- day of month (1 - 31)
# |    |      |    .------- month (1 - 12) OR jan,feb,mar,apr ... 
# |    |      |    |    .---- day of week (0 - 6) (Sunday=0 or 7)  OR sun,mon,tue,wed,thu,fri,sat 
# |    |      |    |    |
# *    *      *    *    *    <command to be executed>

 

The customized SVG file is scp'd to the server as before, replacing the previous day's.  Repeat visitors might have to clear their cache.  It's also important to disable caching on the site for the page, for example if using WP Super Cache or something, because there's no signal to the cache management engine that the file has been updated.

Posted at 05:19:01 GMT-0700

Category: Geopost, HowTo, Linux, Technology

WebP and SVG

Tuesday, September 1, 2020 

Using WebP-coded images inside SVG containers works.  I haven't found any automatic way to do it, but it is easy enough manually and results in very efficiently coded images that work well on the internets.  The manual process is to Base64-encode the WebP image, then open the .svg file in a text editor and replace the

xlink:href="data:image/png;base64, ..."

with

xlink:href="data:image/webp;base64,..."

(“…” means the appropriate data, obviously).
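
If you'd rather script the swap than hand-edit it, here is a rough shell sketch, assuming GNU coreutils base64 and a single embedded PNG payload in the file (file names are placeholders):

#!/bin/bash
# Hypothetical sketch: re-encode an image as Base64 WebP and splice it into
# an SVG in place of the existing PNG payload.
B64=$(base64 -w0 image.webp)    # -w0 = no line wrapping (GNU coreutils)
sed -i -e "s|xlink:href=\"data:image/png;base64,[^\"]*\"|xlink:href=\"data:image/webp;base64,$B64\"|" figure.svg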


Back in about 2010 Google released the spec for WebP, an image compression format that provides roughly 2-4x the coding efficiency of the venerable JPEG (built on vintage-1974 DCT), derived from the VP8 CODEC they bought from On2. VP8 is a contemporary of, and technical equivalent to, H.264 and was developed during a rush of innovation to replace the aging MPEG-2 standard, a rush that also produced Theora and Dirac. Both the VP8 and H.264 CODECs are encumbered by patents, but Google granted an irrevocable license to its VP8 patents, making it "open," while H.264's patents compel licensing from MPEG-LA.  One would think this would tend to make VP8 (and the WebM container) a global standard, but Apple refused to give Google the win and there's still no native support in Apple products.

A small aside on video and still coding techniques.

All modern “lossy” CODECs (throwing some data away, like .mp3, as opposed to “lossless,” meaning the original can be reconstructed exactly, as in .flac) are founded on either Discrete Cosine Transform (DCT) or Discrete Wavelet Transform (DWT) encoding of “blocks” of image data.  There are far more detailed write-ups online that explain the process in detail, but the basic concept is to divide an image into small tiles of data, apply a mathematical function that converts that data into a form which sorts the information from least human-perceptible to most human-perceptible, and then set some threshold for throwing away the least important data while leaving the bits that are most important to human perception.

Wavelets are promising but never really took off, as with JPEG 2000 and Dirac (which was developed by the BBC).  It is a fairly safe bet that any video or still image you see is DCT-coded, thanks to Nasir Ahmed, T. Natarajan and K. R. Rao.  The differences between 1993's MPEG-1 and 2013's H.265 are largely around how the perceptually important data is encoded in each still (intra-frame coding), plus some very important innovations in inter-frame coding that aren't relevant to still images.

It is the application of these clever intra-frame perceptual data compression techniques that is most relevant to the coding efficiency difference between JPEG and WebP.

Back to the good stuff…

Way back in 2010 Google experimented with the VP8 intra-coding techniques to develop WebP, a still image CODEC that had to have two core features:

  • better coding efficiency than JPEG,
  • ability to handle transparency like .png or .tiff.

This could be the one standard image coding technique to rule them all – from icons to gigapixel images, all the necessary features and many times better coding efficiency than the rest.  Who wouldn’t love that?

Apple.

Of course it was Apple.  Can't let Google have the win.  But, finally, with Safari 14 (June 22, 2020 – a decade late!) iOS users can see WebP images and websites don't need crazy auto-detect 1974-tech substitution tricks.  Good job, Apple!

It may not be a coincidence that Apple has just released their own still image format, .heif, based on the intra-frame coding magic of H.265, and maybe they thought it might be a good idea to suddenly pretend to be an open player rather than a walled-garden-screw-you, lest iOS insta-users wall themselves off from the 90% of the world that isn't willing to pay double to pose with a fashionable icon in their hands.  Not surprisingly, .heic, based on H.265 developments, is meaningfully more efficient than WebP, based on VP8/H.264-era techniques, but as it took WebP 10 years to become a usable web standard, I wouldn't count on .heic having universal support soon.

Oh well.  In the meantime, VP8 gave way to VP9 and then to VP10, which was folded into AV1, arguably a generation ahead of HEVC/H.265.  There's no hardware decode yet (as of the end of 2020), but all the big players are behind it, so I expect 2021 devices to support it and GPU decode to arrive in 2021. By then, expect VVC (H.266) to be replacing HEVC (H.265) with a ~35% coding efficiency improvement.

Along with AV1's intra/inter-frame coding advances, the intra-frame techniques are useful for a still format called AVIF: basically, AVIF is to AV1 ("VP11") what WebP is to VP8 and HEIF is to HEVC. So far (Dec 2020) only Chrome and Opera support AVIF images.

Then, of course, there’s JPEG XL on the way.  For now, the most broadly supported post-JPEG image codec is WEBP.

SVG support in browsers is a much older thing – Apple embraced it early (SVG was not developed by Google so….) and basically everything but IE has full support (IE…  the tool you use to download a real browser).  So if we have SVG and WebP, why not both together?

Oddly I can’t find support for this in any of the tools I use, but as noted at the open, it is pretty easy.  The workflow I use is to:

  • generate a graphic in GIMP or Photoshop or whatever and save as .png or .jpg as appropriate to the image content with little compression (high image quality)
  • Combine that with graphics in Inkscape.
  • If the graphics include type, convert the type to SVG paths to avoid font availability problems or having to download a font file before rendering the text or having it render randomly.
  • Save the file (as .svg, the native format of Inkscape)
  • Convert the image file to WebP with a reasonable tool like Nomacs or IrfanView.
  • Base64 encode the image file, either on the command line with base64 infile.webp > outfile.webp.b64 or with a convenient online encoder.
  • If you use the command line option, the prefix to the data is "data:image/webp;base64,".
  • Replace the … in the appropriate xlink:href="...." with the new data using a text editor like Atom (RIP).
  • Drop the file on a browser page to see if it works.

WordPress blocks .svg uploads without a plugin, so you need one.

The picture is 101.9 kB and tolerates zoom quite well (give it a try: click and zoom on the image).

 

Posted at 08:54:16 GMT-0700

Category: Code, HowTo, Linux, photo, Self-publishing, Technology

Will G+ Eat RSS, or Insist on Sole Ownership?

Thursday, July 21, 2011 

Weird: I have yet to find a way to import an RSS feed into G+. This is one of those things that significantly undermines Google's "your data" cred. Anyone know of a way to do it? I haven't found an "import RSS feed into your feed" option the way Facebook kinda does and the WordPress/Facebook plugin does.

I'm a very strong believer in "he who owns the hardware, owns the data," so, for example, posting this on G+ means that this text is Google's (note: this was originally published on G+, but G+ cratered, proving the cloud is ephemeral and public, then I stole it back!). And since it didn't originate on my personal WordPress installation (free as in speech, free as in beer) running on my server at home (free as in speech, not absurdly expensive as in cheap beer), it isn't mine.

My server also runs my mail server, my file server, my web server etc. all from my garage meaning that’s my data and my hardware and fully protected by law, while any data on Google’s server is effectively shared with every good and bad government in the world and my only legal recourse if it gets hacked or stolen or sold or given away or simply deleted is to… write an angry post on my blog and swear never to trust a cloud service again.

This is, obviously, exactly the same with Facebook and every other cloud service. I use Facebook as a syndication service: I post on my own servers and syndicate via RSS to Facebook, which becomes, in effect, the most frequently used RSS reader, through which people who haven't gotten around to blocking me in their streams might find, and perhaps occasionally be amused by, my posts. This means I still own my data, and my data has no particular dependence on Facebook's survival.

This post is visible only as long as Google wants it to be.  If Google changes the rules, I lose the data.  OK, I can download it – as long as they choose to let me, but it isn’t my data. When I post on my server then give FaceBook permission to republish the data, I control my data and they get only what I decide to give them. When I post this on Google and then ask “please, sir, may I recover my post for another use?” the power relationship is reversed: Google owns and controls everything and my rights and usage are only what they deign to offer me.

That almost everyone trusts the billionaire playboys who put king-sized beds in their 767 party plane as "do no evil" paragons of virtue is odd to me, but nothing better validates Erich Fromm's thesis than the pseudo-religious idolatry of Google and Apple.  Still, even the True Believers should realize that the founders of these Great Empires are not truly immortal and that even if Google is doing no evil now, it will change hands, and those that inherit every search you've ever done, every web page you've ever visited, every email you've ever sent, every phone call you've ever made or received, the audio of every message ever left for you, the GPS traces of every step you've ever taken, every text and chat and tweet might think, say, that Doing Good means something different than you think it does.  One should also remember the Socratic Paradox that renders Google's vaunted motto tautological.

Unfortunately, at least so far, Google won't let me use G+ to syndicate my data – they insist on owning it and dictating the terms by which I can access it. If I want to syndicate content through my G+ network, it seems I have to fully gift Google that content. I'm hoping there's a tool to populate my "posts" from RSS so the canonical copy will remain on my server. Because it is the Right Thing To Do.

(Shhhh..  I’m going to copy and paste this into my own wordpress installation, even though I wrote it here on the G+ interface.  They probably won’t send me a DMCA takedown, but I do run the risk that they’ll hit me with a “duplicate content penalty” and set my page rank to 0 thus ensuring nobody ever finds my site again.  Ah, absolute power, so reassuring to remember that it is absolutely incorruptible.)

Posted at 11:47:35 GMT-0700

Category: Politics, Self-publishing, Technology

First try posting by email

Thursday, June 7, 2007 

This is an attempt to post to my blog via email. If it works, then one postie cron job does the trick; otherwise I'll have to set up a postie cron for each blog on the system… let's find out!


Posted at 11:52:50 GMT-0700

Category: Self-publishing, Technology