# Favicon generation script

Monday, December 21, 2020

Favicons are a useful (and fun) part of the browsing experience. They were once simple – just an .ico file of the right size in the root directory. Then things got weird and computing stopped assuming an approximate standard ppi for displays, starting with mobile and “retina” screens. The obvious answer would be .svg favicons but, wouldn’t ya know, Apple doesn’t support them (neither does Firefox mobile), so for a few more iterations it still makes sense to generate an array of sizes with code to select the right one. This little tool pretty much automates that from a starting .svg file.

There are plenty of good favicon scripts and tools on the interwebs. I was playing around with .svg sources for favicons and found it a bit tedious to generate the sizes considered important for current (2020-ish) browsing happiness. I found a good start at Stack Exchange by @gary, though the sizes weren’t the currently recommended ones (per this GitHub project). Your needs may vary, but it is easy enough to edit.

The script relies on the following wonderful FOSS tools:

• Inkscape to render the .svg at each pixel size,
• pngquant to optimize the resulting .png files,
• ImageMagick’s convert to build the multi-layer favicon.ico.

These are available in most distros (the software manager had them in Mint 19).

Note that my version leaves the format as .png – the optimized .png will be many times smaller than the .ico format, and .png works for everything except IE < 11, which nobody should be using anyway. The favicon.ico generated contains 16, 32, and 48 pixel versions in 3 different layers, built from the 512×512 pixel version.

The command line options for Inkscape changed a bit; the bash script below has been updated to reflect the current syntax.

The code below can be saved as a bash file; set the execution bit, call it as ./favicon file.svg, and off you go:

```bash
#!/bin/bash

# this makes the output verbose
set -ex

# collect the file name you entered on the command line (file.svg)
svg=$1

# set the sizes to be generated (plus 310x150 for msft)
size=(16 32 70 128 150 152 167 180 192 310 512)

# set the write directory as a favicon directory below current
out="$(pwd)"
out+="/favicon"
mkdir -p "$out"

echo Making bitmaps from your svg...
for i in "${size[@]}"; do
    inkscape -o "$out/favicon-$i.png" -w $i -h $i "$svg"
done

# Microsoft wide icon (annoying, probably going away)
inkscape -o "$out/favicon-310x150.png" -w 310 -h 150 "$svg"

echo Compressing...
for f in "$out"/*.png; do
    pngquant -f --ext .png "$f" --posterize 4 --speed 1
done

echo Creating favicon
convert "$out/favicon-512.png" -define icon:auto-resize=48,32,16 "$out/favicon.ico"

echo Done
```
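If you want a quick sanity check before pointing the loop at Inkscape, a dry run with echo (a sketch reusing the same size array as the script above) prints exactly which files will be written:

```shell
# dry run: list the bitmap files the script would generate,
# without invoking inkscape
size=(16 32 70 128 150 152 167 180 192 310 512)
out="$(pwd)/favicon"
for i in "${size[@]}"; do
    echo "$out/favicon-$i.png"
done
```

Eleven sizes in, eleven .png paths out; if the list looks right, run the real script.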


Copy the .png files generated above, as well as the original .svg file, into your root directory (or, if in a sub-directory, add the path below), editing the “color” of the Safari pinned tab mask icon. You might also want to make a monochrome version of the .svg file and reference that as the “mask-icon” instead; it will probably look better, but that’s more work.

The following goes inside the head directives in your index.html to load the correct sizes as needed (delete the lines for Microsoft’s browserconfig.xml file and/or Android’s manifest file if not needed.)

```html
<!-- basic svg -->
<link rel="icon" type="image/svg+xml" href="/favicon.svg" />
<link rel="mask-icon" href="/favicon.svg" color="#ff8d22" />

<!-- generics -->
<link rel="icon" href="/favicon-16.png" sizes="16x16" />
<link rel="icon" href="/favicon-32.png" sizes="32x32" />
<link rel="icon" href="/favicon-128.png" sizes="128x128" />
<link rel="icon" href="/favicon-192.png" sizes="192x192" />

<!-- Android -->
<link rel="manifest" href="/manifest.json" />

<!-- iOS -->
<link rel="apple-touch-icon" href="/favicon-152.png" sizes="152x152" />
<link rel="apple-touch-icon" href="/favicon-167.png" sizes="167x167" />
<link rel="apple-touch-icon" href="/favicon-180.png" sizes="180x180" />

<!-- Windows -->
<meta name="msapplication-config" content="/browserconfig.xml" />
```


For WordPress integration you don’t have access to a standard index.html file, and there are crazy redirects happening, so you need to append the code snippet below to your theme’s functions.php file, wrapped around the icon declaration block above (optimally in your child theme, unless you’re a theme developer, since it’ll get overwritten on update otherwise):

```php
/* Allows browsers to find favicons */
/* (the function name "add_favicons" is arbitrary – any unique name works) */
function add_favicons() {
?>
REPLACE THIS LINE WITH THE BLOCK ABOVE
<?php
};
add_action('wp_head', 'add_favicons');
```

Then, just for Windows 8 & 10, there’s an XML file to add to your directory (root by default in this example). Note that you need to select a tile color for your site, and the file has to be named “browserconfig.xml”:

```xml
<?xml version="1.0" encoding="utf-8"?>
<browserconfig>
  <msapplication>
    <tile>
      <square70x70logo src="/favicon-70.png"/>
      <square150x150logo src="/favicon-150.png"/>
      <wide310x150logo src="/favicon-310x150.png"/>
      <square310x310logo src="/favicon-310.png"/>
      <TileColor>#ff8d22</TileColor>
    </tile>
  </msapplication>
</browserconfig>
```


There’s one more file that’s helpful for mobile compatibility, the Android save-to-desktop file, “manifest.json”. This one requires editing and can’t be pure copy-pasta: fill in the blanks and select your colors.

```json
{
  "name": "",
  "short_name": "",
  "description": "",
  "start_url": "/?homescreen=1",
  "icons": [
    {
      "src": "/favicon-192.png",
      "sizes": "192x192",
      "type": "image/png"
    },
    {
      "src": "/favicon-512.png",
      "sizes": "512x512",
      "type": "image/png"
    }
  ],
  "theme_color": "#ffffff",
  "background_color": "#ff8d22",
  "display": "standalone"
}
```
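Before uploading, it’s worth confirming the edited manifest still parses as JSON – a stray comma after the fill-in-the-blanks edit is easy to miss. A minimal local check, using Python’s stdlib JSON tool (shipped in most distros; the file name assumes the manifest above is in the current directory):

```shell
# exits non-zero (and prints the error location) if the JSON is malformed
python3 -m json.tool manifest.json > /dev/null && echo "manifest.json parses"
```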


Check the icons with this favicon tester (or any other).

Manifest validation: https://manifest-validator.appspot.com/

Posted at 17:26:44 GMT-0700

Category: HowTo, Linux, self-publishing

# WebP and SVG

Tuesday, September 1, 2020

Using WebP coded images inside SVG containers works.  I haven’t found any automatic way to do it, but it is easy enough manually and results in very efficiently coded images that work well on the internets.  The manual process is to Base64 encode the WebP image and then open the .svg file in a text editor and replace the

xlink:href="data:image/png;base64, ..."

with

xlink:href="data:image/webp;base64,..."

(“…” means the appropriate data, obviously).
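The swap can also be scripted with base64 and sed. A minimal sketch follows; the two printf lines just fabricate throwaway stand-in files so the commands can be run as-is – with real images, point base64 and sed at your own .webp and .svg instead:

```shell
# stand-ins for a real WebP image and an SVG with an embedded PNG
printf 'stand-in-webp-bytes' > sample.webp
printf '<svg xmlns:xlink="http://www.w3.org/1999/xlink"><image xlink:href="data:image/png;base64,AAAA"/></svg>' > sample.svg

# Base64-encode the WebP payload as a single line (-w 0 is GNU base64)
b64=$(base64 -w 0 sample.webp)

# swap the PNG data URI for the WebP one, in place
sed -i "s|data:image/png;base64,[^\"]*|data:image/webp;base64,${b64}|" sample.svg

grep -o 'data:image/[a-z]*' sample.svg   # now reports data:image/webp
```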

Back in about 2010 Google released the spec for WebP, an image compression format that provides roughly 2-4x the coding efficiency of the venerable JPEG (vintage 1974), derived from the VP8 CODEC they bought from On2. VP8 is a contemporary of, and technical equivalent to, H.264 and was developed during a rush of innovation to replace the aging MPEG-II standard, a rush that also produced Theora and Dirac. Both the VP8 and H.264 CODECs are encumbered by patents, but Google granted an irrevocable license to all of VP8’s patents, making it “open,” while H.264’s patents compel licensing from MPEG-LA. One would think this would tend to make VP8 (and the WebM container) a global standard, but Apple refused to give Google the win and there’s still no native support in Apple products.

A small aside on video and still coding techniques.

All modern “lossy” CODECs (those that throw some data away, like .mp3, as opposed to “lossless,” meaning the original can be reconstructed exactly, as in .flac) are founded on either Discrete Cosine Transform (DCT) or Wavelet (DWT) encoding of “blocks” of image data. There are far more detailed write-ups online that explain the process in depth, but the basic concept is to divide an image into small tiles of data, apply a mathematical function that sorts the information from least human-perceptible to most human-perceptible, and set some threshold for throwing away the least important data while keeping the bits that matter most to human perception.

Wavelets are promising but never really took off; they underpin JPEG2000 and Dirac (which was developed by the BBC). It is a fairly safe bet that any video or still image you see is DCT coded, thanks to Nasir Ahmed, T. Natarajan and K. R. Rao. The differences between 1993’s MPEG-1 and 2013’s H.265 are largely around how the perceptually important data is encoded in each still (intra-frame coding), plus some very important innovations in inter-frame coding that aren’t relevant to still images.

It is the application of these clever intra-frame perceptual data compression techniques that is most relevant to the coding efficiency difference between JPEG and WebP.

Back to the good stuff…

Way back in 2010 Google experimented with the VP8 intra-coding techniques to develop WebP, a still image CODEC that had to have two core features:

• better coding efficiency than JPEG,
• ability to handle transparency like .png or .tiff.

This could be the one standard image coding technique to rule them all – from icons to gigapixel images, all the necessary features and many times better coding efficiency than the rest.  Who wouldn’t love that?

Apple.

Of course it was Apple. Can’t let Google have the win. But finally, with Safari 14 (June 22, 2020 – a decade late!), iOS users can see WebP images and websites don’t need crazy auto-detect 1974-tech-substitution tricks. Good job, Apple!

It may not be a coincidence that Apple has just released their own still image format, .heif, based on the intra-frame coding magic of H.265. Maybe they thought it might be a good idea to suddenly pretend to be an open player rather than a walled-garden-screw-you, lest iOS insta-users wall themselves off from the 90% of the world that isn’t willing to pay double to pose with a fashionable icon in their hands. Not surprisingly, .heic, based on H.265-era developments, is meaningfully more efficient than WebP, based on VP8/H.264-era techniques, but as it took WebP 10 years to become a usable web standard, I wouldn’t count on .heic having universal support soon.

Oh well. In the meantime, VP8 gave way to VP9 and then VP10, which has now become AV1, arguably a generation ahead of HEVC/H.265. There’s no hardware decode (yet, as of the end of 2020), but all the big players are behind it, so I expect 2021 devices will support it and GPU decode will come in 2021. By then, expect VVC (H.266) to be replacing HEVC (H.265) with a ~35% coding efficiency improvement.

Along with AV1’s intra/inter-frame coding advances, the intra-frame techniques are useful for a still format called AVIF: basically, AVIF is to AV1 (“VP11”) what WebP is to VP8 and HEIF is to HEVC. So far (Dec 2020) only Chrome and Opera support AVIF images.

Then, of course, there’s JPEG XL on the way.  For now, the most broadly supported post-JPEG image codec is WEBP.

SVG support in browsers is a much older thing – Apple embraced it early (SVG was not developed by Google, so…) and basically everything but IE has full support (IE… the tool you use to download a real browser). So if we have SVG and WebP, why not both together?

Oddly I can’t find support for this in any of the tools I use, but as noted at the open, it is pretty easy.  The workflow I use is to:

• Generate a graphic in GIMP or Photoshop or whatever and save it as .png or .jpg, as appropriate to the image content, with little compression (high image quality).
• Combine that with graphics in Inkscape.
• If the graphics include type, convert the type to SVG paths to avoid font availability problems, or having to download a font file before rendering the text, or having it render randomly.
• Save the file (as .svg, the native format of Inkscape).
• Convert the image file to WebP with a reasonable tool like Nomacs or IrfanView.
• Base64 encode the image file, either on the command line with base64 infile.webp > outfile.webp.b64 or with this convenient site.
• If you use the command line option, the prefix to the data is “data:image/webp;base64,”.
• Replace the … in the appropriate xlink:href="…" with the new data using a text editor like Atom.
• Drop the file on a browser page to see if it works.

WordPress blocks .svg uploads without a plugin, so you need one.

The picture is 101.9kB and tolerates zoom quite well (give it a try: click and zoom on the image).

Posted at 08:54:16 GMT-0700

Category: HowTo, Linux, photo, self-publishing, technology

# Open letter to the FCC 5 regarding net neutrality

Saturday, November 25, 2017

I’m in favor of net neutrality for a lot of reasons; a personal one is that I rely on fair and open transport of my bits to work overseas. If you happen to find this little screed, you can also thank net neutrality for that, as any argument for neutrality would likely be made unavailable by ISPs, which could charge exorbitant rents for their natural monopolies and would be remiss in their fiduciary responsibility should they fail to take every possible step to maximize shareholder value – for example, by permitting their customers access to arguments contrary to their financial or political interests.

I sent the following to the FCC 5.  I am not, I’m sorry to say, optimistic.

Please protect Net Neutrality. It is essential to my ability to operate in Iraq, where I run a technical security business that relies on access to servers and services in the United States. If access to those services becomes subject to a maze of tiered access limitations and tariffs, rather than being treated universally as flat rate data, my business may become untenable unless I move my base of operations to a net neutrality-respecting jurisdiction. The FCC is, at the moment, the only bulwark against a balkanization of data and the collapse of the value premise of the Internet.

While I understand and am sympathetic to the premises that less government regulation is better in principle and that less regulated markets can be more efficient, this “invisible hand” only works to the public’s benefit in a “well regulated market.” There are significant cases where market forces cannot be beneficial, for example, where the fiduciary responsibility of a company to maximize share-holder value compels exploitation of monopoly rents to the fullest extent permitted by law and, where natural monopolies exist, only regulation prevents those rents from becoming abusive. Delivery of data services is a clear example of one such case, both due to the intrinsic monopoly of physical deployment of services through public resources and due to inherent opportunities to exert market-distorting biases into those services to promote self-beneficial products and inhibit competition. That this might happen is not idle speculation: network services companies have routinely attempted to unfairly exploit their positions to their benefit and to the harm of fair and open competition, and in many cases were restrained only by existing net neutrality laws that the FCC is currently considering rescinding. The consequences of rescinding net neutrality will be anti-competitive, anti-productive, and will stifle innovation and economic growth.

While it is obvious and inevitable that network companies will abuse their natural monopolies to stifle competition, as they have attempted many times, restrained only by previous FCC enforcement of the principle of net neutrality, rescinding net neutrality also poses a direct risk to the validity of democracy. While one can argue that Facebook has already compromised democracy by becoming the world’s largest provider of news through an extraordinarily easily manipulated content delivery mechanism, there’s no evidence that they have yet exploited this to achieve any particular political end nor actively censored criticism of their practices. However, without net neutrality there is no legal protection to inhibit carriers from exploiting their control over content delivery to promote their corporate or political interests while censoring embarrassing or opposing information. As the vast majority of Americans now get their news from on-line resources, control over the delivery of those resources becomes an extraordinarily powerful political weapon; without net neutrality it is perfectly legal for corporations to get “their hands on those weapons” and deploy them against their economic and political adversaries.

Under an implicit doctrine of net neutrality – from a naive, but then technically accurate, concept of the internet as a packet network that would survive a nuclear war and would treat censorship as “damage” and “route around it automatically,” to 2005’s Madison River ruling, to the 2008 Comcast ruling, to 2010’s Open Internet Order – the internet has flourished as an open network delivering innovative services and resources that all businesses have come to rely on fairly and equally. Overturning that historical doctrine will result in a digital communications landscape in the US that resembles AT&T’s pre-breakup telephone service: you will be permitted to buy only the services that your ISP deems most profitable to themselves. In the long run, if net neutrality is not protected, one can expect the innovation that has centered in the US since the birth of the internet, which some of us remember as the government-sponsored innovation ARPAnet, to migrate to less corporatist climates, such as Europe, where net neutrality is enshrined in law.

The American people are counting on you to protect us from such a catastrophic outcome.

Do not reverse the 2015 Open Internet Order.

Sincerely,

David Gessel

Posted at 10:27:50 GMT-0700

# Kitty Poop

Tuesday, June 27, 2017

Many years ago (21 years, 9 months as of this post), I used some as-of-then only slightly out of date equipment to record a one week time lapse of the cats’ litter box.

I found the video on a CD-ROM (remember those?) and thought I’d see if it was still usable. It wasn’t – QuickTime had abandoned support for most of the 1990’s-era codecs, and as the video was pre-internet, there just wasn’t any support any more. I had to fire up my old Mac 9500, which booted just fine after years of sitting, even if most of the rubber feet on the peripherals had long since turned to goo. The OS9 version of QT let me resave it as uncompressed, which of course was way too big for the massive dual 9GB drives in that machine. YouTube would eat the uncompressed format, and so this critical archival record is preserved for a little longer.

Posted at 15:16:46 GMT-0700

Category: cats, films, funny, odd, self-publishing, vanity sites, video

# Xabber now uses Orbot: OTR+Tor

Sunday, November 3, 2013

As of Sept 30 2013, Xabber added Orbot support. This is a huge win for chat security. (Gibberbot has done this for a long time, but it isn’t as user-friendly or pretty as Xabber and it is hard to convince people to use it).

The combination of Xabber and Orbot solves the three most critical problems in chat privacy: obscuring what you say via message encryption, obscuring who you’re talking to via transport encryption, and obscuring which servers to subpoena for at least the latter information via onion routing. OTR solves the first and Tor fixes the last two (SSL solves the middle one too, but while Tor has a fairly secure SSL ciphersuite, who knows what that random SSL-enabled chat server uses – “none”?).

There’s a fly in the ointment of all this crypto: we’ve recently learned a few entirely predictable (and predicted) things about how communications are monitored:

1) All communications are captured and stored indefinitely. Nothing is ephemeral; not a phone conversation, nor an email, nor the web sites you visit. It is all stored and indexed, and should somebody sometime in the future decide that your actions are immoral or illegal or insidious or insufficiently respectful, this record may be used to prove your guilt or otherwise tag you for punishment; who knows what clever future algorithms will be used in concert with big data and cloud services to identify and segregate the optimal scapegoat population for whatever political crisis is thus most expediently deflected. Therefore, when you encrypt a conversation it has to be safe not just against current cryptanalytic attacks, but against those that might emerge before the sins of the present are sufficiently in the past to exceed the limitations of whatever entity is enforcing whatever rules. A lifetime is probably a safe bet. YMMV.

2) Those that specialize in snooping at the national scale have tools that aren’t available to the academic community, and there are cryptanalytic attacks of unknown efficacy against some or all of the current cryptographic protocols. I heard someone who should know better pooh-pooh the idea that the NSA might have better cryptographers than the commercial world because the commercial world pays better – as if the obsessive brilliance that defines a world-class cryptographer is motivated by remuneration. Not.

But you can still do better than nothing while understanding that a vulnerability to the NSA isn’t likely to be an issue for many, though if PRISM access is already being disseminated downstream to the DEA, it is only a matter of time before politically affiliated hate groups are trolling emails looking for evidence of moral turpitude with which to tar the unfaithful. Any complacency that might be engendered by not being a terrorist may be short lived. Enjoy it while it lasts.

But there’s still a flaw. You’re probably using Google. So anyone can just go to Google and ask them who you were chatting with, for how long, and about how many words you exchanged. The content is lost, but there’s a lot of meta-data there to play with.

So don’t use gchat if you care about that. It isn’t that hard to set up a chat server.

But maybe you’re a little concerned that your ISP not know who you’re chatting with. Given that your ISP (at the local or national level) might have a bluecoat device and could easily be man-in-the-middling every user on their network simultaneously, you might have reason to doubt Google’s SSL connection. While OTR still protects the content of your chat, an inexpensive bluecoat device renders the meta information visible to whoever along your coms path has bought one. This is where Tor comes in. While Google will still know (you’re still using Google even after they lied to you about PRISM and said, in court, that nobody using Gmail has any reasonable expectation of privacy?) your ISP (commercial or national) is going to have a very hard time figuring out that you’re even talking to Google, let alone with whom. Even the fact that you’re using chat is obscured.

So give Xabber a try. Check out Orbot, the effortless way to run it over Tor. And look into alternatives to cloud providers for everything you do.

Posted at 08:50:47 GMT-0700

Category: FreeBSD, self-publishing, technology

# Yahoo account PSA

Sunday, March 17, 2013

It seems that if you have a Yahoo mail account, it either already has been or soon will be hacked. There’s some news out there about this…

Yes, how could you not be sure that when somebody offers to host your personal data for free on their servers, nothing could possib-lie go wrong. Uh, possibLY go wrong.

Posted at 08:08:01 GMT-0700

Category: politics, self-publishing, technology

# You can’t read this at the Westin

Monday, December 26, 2011

Oddly, this server is blocked by the network at the Westin Grand, Berlin. Everything else seems to work, even www.dis.org (which is blocked by sites that subscribe to the Barracuda filter list, ’cause any site with information on radios is frequented by hackerz). It does not seem to be a national-level block, as I get plenty of visitors from Germany.

Easy enough to get around by VPN, but odd.  Very odd indeed.

Posted at 09:02:40 GMT-0700

Category: hotels, self-publishing, technology, travel

# Will G+ Eat RSS, or Insist on Sole Ownership?

Thursday, July 21, 2011

Weird: I have yet to find a way to import an RSS feed into G+. This is one of those things that significantly undermines Google’s “your data” cred. Anyone know of a way to do it? I haven’t found an “import RSS feed into your feed” feature the way Facebook kinda has one and the WordPress/Facebook plugin provides.

I’m a very strong believer in “he who owns the hardware, owns the data,” so, for example, posting this on G+ means that this text is Google’s (note, this was originally published on G+, then I stole it back!). And since it didn’t originate on my personal wordpress installation (free as in speech, free as in beer) running on my server at home (free as in speech, not absurdly expensive as in cheap beer), it isn’t mine.

My server also runs my mail server, my file server, my web server etc. all from my garage meaning that’s my data and my hardware and fully protected by law, while any data on Google’s server is effectively shared with every good and bad government in the world and my only legal recourse if it gets hacked or stolen or sold or given away or simply deleted is to… write an angry post on my blog and swear never to trust a cloud service again.

This is, obviously, exactly the same at FaceBook and every other cloud service. I use Facebook as a syndication service: I post on my own servers and syndicate via RSS to FaceBook, which becomes, in effect, the most frequently used RSS reader, through which people who haven’t gotten around to blocking me in their streams might find, and perhaps occasionally be amused by, my posts. This means I still own my data and my data has no particular dependence on FaceBook’s survival.

This post is visible only as long as Google wants it to be.  If Google changes the rules, I lose the data.  OK, I can download it – as long as they choose to let me, but it isn’t my data. When I post on my server then give FaceBook permission to republish the data, I control my data and they get only what I decide to give them. When I post this on Google and then ask “please, sir, may I recover my post for another use?” the power relationship is reversed: Google owns and controls everything and my rights and usage are only what they deign to offer me.

That almost everyone trusts the billionaire playboys who put king sized beds in their 767 party plane as “do no evil” paragons of virtue is odd to me, but nothing better validates Erich Fromm’s thesis than the pseudo-religious idolatry of Google and Apple. Still, even the True Believers should realize that the founders of these Great Empires are not truly immortal, and that even if Google is doing no evil now, it will change hands, and those that inherit every search you’ve ever done, every web page you’ve ever visited, every email you’ve ever sent, every phone call you’ve ever made or received, the audio of every message ever left for you, the GPS traces of every step you’ve ever taken, every text and chat and tweet might think, say, that Doing Good means something different than you think it does. One should also remember the Socratic Paradox that renders tautological Google’s vaunted motto.

Unfortunately, at least so far, Google won’t let me use G+ to syndicate my data – they insist on owning it and dictating the terms by which I can access it. If I want to syndicate content through my G+ network, it seems I have to fully gift Google that content. I’m hoping there’s a tool to populate my “posts” from RSS so the canonical will remain on my server. Because it is the Right Thing To Do.

(Shhhh..  I’m going to copy and paste this into my own wordpress installation, even though I wrote it here on the G+ interface.  They probably won’t send me a DMCA takedown, but I do run the risk that they’ll hit me with a “duplicate content penalty” and set my page rank to 0 thus ensuring nobody ever finds my site again.  Ah, absolute power, so reassuring to remember that it is absolutely incorruptible.)

Posted at 11:47:35 GMT-0700

Category: politics, self-publishing, technology

# Finding fun TLDs

Wednesday, September 15, 2010

URL shortening services have discovered international TLDs because they aren’t as jammed as .com where every combination of 5 or fewer characters and every word in the English language is registered at a parking site.  Some new ones (t.co) have found pretty good short domains.

I found that RWGUSA.net is a great resource for registering domains around the globe – not only do they let you register all the easy ones in the table below, but they also have a way to register more complicated ones that require in-country support (for a fee).

Then all you need is a wildcard dictionary search and some patience to find a cute short domain.  Check out the list below:
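The “wildcard dictionary search” can be as simple as grepping a word list for words that end in the TLD you’re after. A toy sketch (a tiny inline list stands in for /usr/share/dict/words, and the .me TLD is the example):

```shell
# find words ending in "me" – each hit can be split into a domain hack,
# e.g. awesome -> aweso.me
tld=me
printf '%s\n' awesome world tilde coffee gnome chrome | grep "${tld}$"
```

With the real dictionary file, `grep "${tld}$" /usr/share/dict/words` does the same at scale; swap in whichever TLD you’re hunting.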

Posted at 19:26:50 GMT-0700

Category: self-publishing, technology

Tuesday, May 5, 2009

I discovered TwitterFeed and I was happy. It does a nice job of formatting blog entries as tweets. I set it up, then went back to it later after I changed my login for Twitter and – whoops – you can only log in with OpenID.

Uh oh. OpenID. Why? Why do this? It is a solution in search of a problem. It is very clever and worse than useless. It must be a support nightmare. So instead of having my browser automagically insert my passwords (and instead of having my browser’s convenient “show passwords” option to help me figure out what they all are in one convenient place), I have to remember some random URL from a totally random company I’ve never heard of, do not have any reason to trust, and would never use for anything else.

Great.

Security!  Plus they use some idiotic picture picker thing instead of a password.  Why?  Why?

These things are great in theory, but worse than useless in practice.

Time to find another blog->twitter tool.  Hello hellotxt.com