I got a cheap Android TV box off Amazon a while back; after some consolidation it ended up being spare. One of the problems with these TV boxes is that they don’t get OS updates after release, and soon video apps stop working because of annoying DRM changes. I figured I’d try switching one to Linux and see how it went. There are some complications: these are ARM devices (not x86/x64), and getting ADB to connect is slightly tricky because it isn’t a USB device. There are a few variations of the “ELEC” (Embedded Linux Entertainment Center) distributions: OpenELEC, LibreELEC, and CoreELEC (at least). CoreELEC focuses on ARM boxes and makes sense for the cheapest ones.
What you get is a little Linux-based device that boots to Kodi, runs an SMB server, and provides SSH access: basically an open-source SmartTV that doesn’t spy on you. It is well suited to a media library you own rather than rent, such as your DVD/Blu-ray collection. Perhaps physical media will come back into vogue as the rent-to-watch streaming world fractures into a million individual subscriptions. Once moved to the FOSS world, there are regular updates and, generally, people committed to keeping things working: far better and far longer-term support than OEMs offer for random cheap Android devices (or even quite expensive ones).
A: Download the CoreELEC image:
You need to know the processor and RAM of your device – note that a lot of different brands are the same physical boxes. My “CoCoq M9C Pro” (not listed) is also sold as a “Bqeel M9C Pro” (listed) and the SOC (S905X) and RAM (1G) match, so the Bqeel image worked fine. The download helper at the bottom of the https://coreelec.org/ site got me the right file easily.
B: Image a uSD or USB device for your wee box:
I’m running Linux as my desktop, so I used the LibreELEC Creator Tool and just selected the file downloaded previously. That part went easily. The only issue was that after imaging, Linux didn’t recognize the device and I had to run
systemctl --user restart gvfs-udisks2-volume-monitor
before I could do the next step.
C: One manual file move for bootability:
ELEC systems use a “device tree” – a file of device-specific hardware descriptors that gets (most of) the random cheap peripherals on these cheap boxes working. You have to find the right one in the /Device Trees folder on your newly imaged uSD/USB device, copy it to the root directory, and rename it dtb.img.
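As a concrete sketch (the mount point and .dtb file name below are assumptions – pick the tree that matches your SOC and RAM):

```shell
# Assumed mount point of the freshly imaged media, and an example
# device tree for an S905X/1G box -- both will differ on your system.
BOOT=/media/$USER/COREELEC
cp "$BOOT/Device Trees/gxl_p212_1g.dtb" "$BOOT/dtb.img"
```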
D: Awesome, now what?
Once you have your configured, formatted, bootable uSD/USB, you have to get your box to boot off it. This is a bit harder without the power/volume control buttons normally found on Android devices. I already have ADB set up for stripping bloatware from phones. You just need one command, adb reboot update, but getting it to your box is a bit of a trick.
Most of these boxes are supplied rooted, so you don’t need to do that, but you do need a command line tool and to enable ADB.
To enable ADB: navigate to the build number (usually Settings > System > About phone > Build number) and click it a bunch of times until it says “you’re a developer!” Then go to developer options and enable ADB. You can’t use it yet, though, because there’s no USB connection to the box.
Install a Terminal Emulator for Android and enable ADB over TCP:
su
setprop service.adb.tcp.port 5555
stop adbd
start adbd
Then check your IP with ifconfig, and on your desktop computer run
adb connect dev.ip.add.ress:5555
(dev.ip.add.ress is the IP address of your device – like 192.168.100.46, but yours will be different)
Then run adb reboot update from the desktop computer, and the box should reboot and come up in CoreELEC.
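Putting the desktop side together (the IP address here is an example – use your box’s address):

```shell
# Desktop side, once ADB-over-TCP is enabled on the box.
# 192.168.100.46 is an example address -- yours will differ.
adb connect 192.168.100.46:5555
adb devices            # the box should be listed as 192.168.100.46:5555  device
adb reboot update      # reboots the box into the CoreELEC image on the uSD/USB
```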
Going back to Android is as easy as disconnecting the boot media and rebooting; the Android OS is still on the device.
Issue: WIFI support.
Many of these cheap devices use the cheapest WIFI chips they can find, and the manufacturers often won’t share details with FOSS developers, so driver support is weak. Bummer, but the boxes have wired connections, and wired works infinitely better anyway: it isn’t a phone, it’s attached to a TV that’s not moving around, so run the damn wire and get a stable, reliable connection. If that’s not possible, check the WIFI chip before buying, or get a decent USB WIFI adapter that is supported.
Using WebP-coded images inside SVG containers works. I haven’t found any automatic way to do it, but it is easy enough manually, and it results in very efficiently coded images that work well on the internets. The manual process is to Base64 encode the WebP image, then open the .svg file in a text editor and replace the xlink:href="…" data with the encoded WebP.
(“…” means the appropriate data, obviously).
Back in about 2010, Google released the spec for WebP, an image compression format derived from the VP8 CODEC they bought from On2, offering roughly 2-4x the coding efficiency of the venerable JPEG (built on DCT techniques dating to 1974). VP8 is a contemporary of, and technical equivalent to, H.264, and was developed during a rush of innovation to replace the aging MPEG-II standard that also produced Theora and Dirac. Both the VP8 and H.264 CODECs are encumbered by patents, but Google granted an irrevocable license to all of VP8’s patents, making it “open,” while H.264’s patents compel licensing from MPEG-LA. One would think this would tend to make VP8 (and the WEBM container) a global standard, but Apple refused to give Google the win, and there’s still no native support in Apple products.
A small aside on video and still coding techniques.
All modern “lossy” CODECs (those that throw some data away, like .mp3, as opposed to “lossless” formats like .flac, from which the original can be reconstructed exactly) are founded on either Discrete Cosine Transform (DCT) or Wavelet (DWT) encoding of “blocks” of image data. There are far more detailed write-ups online that explain the process in depth, but the basic concept is to divide an image into small tiles of data, apply a mathematical function that sorts the information from least human-perceptible to most human-perceptible, and then set a threshold for throwing away the least important data while keeping the bits that matter most to human perception.
Wavelets are promising, but never really took off, as in JPEG2000 and Dirac (which was developed by the BBC). It is a fairly safe bet that any video or still image you see is DCT coded thanks to Nasir Ahmed, T. Natarajan and K. R. Rao. The differences between 1993’s MPEG-1 and 2013’s H.265 are largely around how the data that is perceptually important is encoded in each still (intra-frame coding) and some very important innovations in inter-frame coding that aren’t relevant to still images.
It is the application of these clever intra-frame perceptual data compression techniques that is most relevant to the coding efficiency difference between JPEG and WebP.
Back to the good stuff…
Way back in 2010 Google experimented with the VP8 intra-coding techniques to develop WebP, a still image CODEC that had to have two core features:
- better coding efficiency than JPEG,
- ability to handle transparency like .png or .tiff.
This could be the one standard image coding technique to rule them all – from icons to gigapixel images, all the necessary features and many times better coding efficiency than the rest. Who wouldn’t love that?
Of course it was Apple; can’t let Google have the win. But finally, with Safari 14 (June 22, 2020 – a decade late!), iOS users can see WebP images, and websites don’t need crazy auto-detect 1974-tech substitution tricks. Good job, Apple!
It may not be a coincidence that Apple has just released their own still image format, .heif, based on the intra-frame coding magic of H.265. Maybe they thought it might be a good idea to suddenly pretend to be an open player rather than a walled-garden-screw-you, lest iOS Insta users wall themselves off from the 90% of the world that isn’t willing to pay double to pose with a fashionable icon in their hands. Not surprisingly, .heic, based on H.265-era developments, is meaningfully more efficient than WebP, based on VP8/H.264-era techniques; but as it took WebP 10 years to become a usable web standard, I wouldn’t count on .heic having universal support soon. I expect Google is hard at work on a VP9-based version of WebP right now. Technology is fast, marketing is slow. Oh well. In the meantime….
SVG support in browsers is a much older thing – Apple embraced it early (SVG was not developed by Google so….) and basically everything but IE has full support (IE… the tool you use to download a real browser). So if we have SVG and WebP, why not both together?
Oddly I can’t find support for this in any of the tools I use, but as noted at the open, it is pretty easy. The workflow I use is to:
- generate a graphic in GIMP or Photoshop or whatever and save as .png or .jpg as appropriate to the image content with little compression (high image quality)
- Combine that with graphics in Inkscape.
- If the graphics include type, convert the type to SVG paths to avoid font availability problems: having to download a font file before the text renders, or having it render in a random substitute font.
- Save the file (as .svg, the native format of Inkscape)
- Convert the image file to WebP with a reasonable tool like Nomacs or IrfanView.
- Base64 encode the image file, either with base64:
# base64 infile.webp > outfile.webp.b64
or with a convenient online encoding site.
- If you use the command line option, the prefix to the data is “data:image/webp;base64,”
- Replace the “…” in the appropriate xlink:href="…" with the new data, using a text editor like Atom.
- Drop the file on a browser page to see if it works.
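The replace step can also be scripted. Here’s a self-contained sketch (the file names and the stand-in payload are made up for the demo) that Base64-encodes a “WebP” file and splices it into the xlink:href of a stub SVG:

```shell
# Create a tiny stand-in "webp" payload and a stub SVG, then splice the
# Base64 data into the xlink:href attribute. Real files work the same way.
printf 'RIFFfakewebp' > photo.webp      # stand-in for a real WebP file
cat > image.svg <<'EOF'
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
  <image width="100" height="100" xlink:href="photo.png"/>
</svg>
EOF
B64=$(base64 -w0 photo.webp)            # GNU base64; -w0 disables line wrapping
sed -i "s|xlink:href=\"[^\"]*\"|xlink:href=\"data:image/webp;base64,$B64\"|" image.svg
grep -o 'data:image/webp;base64,[A-Za-z0-9+/=]*' image.svg
# -> data:image/webp;base64,UklGRmZha2V3ZWJw
```

Base64 never contains the `|` or `&` characters, so they are safe choices for the sed delimiter and replacement; with real images the variable will just be much longer.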
The picture is 101.9kB and tolerates zoom quite well. (give it a try, click and zoom on the image).
Some of the coding tricks in H.265 have been incorporated into MPEG-H coding, an ISO standard introduced in 2017, which yields roughly a 2:1 coding efficiency gain over the venerable JPEG, introduced in 1992. Remember that? I do; I’m old. I remember having a hardware NuBus JPEG decoder card. One of the reasons JPEG has lasted so long is that images have become a small storage burden (compared to 4K video, say) and that changing format standards is extremely annoying for everyone.
Apple has elected to make every rational person’s life difficult, put a little barbed wire around their high-fashion walled garden, and do something a little special with their brand of HEVC (H.265) image profile. Now, normally, seeing iOS users’ Insta images of how fashionable they are isn’t really worth the effort, but now and then a useful correspondent joins the cult, forks over a ton of money to show off a logo, and starts sending you stuff in their special proprietary format. Annoying, but fixable.
$ sudo add-apt-repository ppa:jakar/qt-heif
$ sudo apt update
$ sudo apt install qt-heif-image-plugin
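If you’d rather just convert the files than teach Qt apps to display them, libheif’s example tools can do it (on Ubuntu the package is libheif-examples; the input file name here is made up):

```shell
# Convert an iOS .heic to JPEG with libheif's example tool.
# IMG_1234.heic is a hypothetical input file.
heif-convert IMG_1234.heic IMG_1234.jpg
```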
The Waterfox PPA changed recently. The following let me update from 56.2.8 to 56.2.10 (between which the old PPA was removed).
First remove the old hawkeye PPA from your sources list, then:
echo "deb http://download.opensuse.org/repositories/home:/hawkeye116477:/waterfox/xUbuntu_18.04/ /" | sudo tee -a /etc/apt/sources.list
wget -nv https://download.opensuse.org/repositories/home:hawkeye116477:waterfox/xUbuntu_18.04/Release.key -O Release.key
sudo apt-key add - < Release.key
sudo apt update
sudo apt upgrade
Note: Ubuntu 18.04 = Mint 19/19.1; the 18.10 deb fails.
I found myself having odd problems connecting to WPA2-encrypted wireless networks with a new laptop. There must be more elegant solutions to this problem, but this worked for me. The problem was that I couldn’t connect to a nearby hotspot secured with WPA2, whether I used the default config tool for Mint, Wicd Network Manager, or the command line. Errors were either “bad password” or the more detailed errors below.
As with any system variation mileage may vary, my errors look like:
wlan0: CTRL-EVENT-SCAN-STARTED
wlan0: SME: Trying to authenticate with 68:72:51:00:26:26 (SSID='WA-bullet' freq=2462 MHz)
wlan0: Trying to associate with 68:72:51:00:26:26 (SSID='WA-bullet' freq=2462 MHz)
wlan0: Associated with 68:72:51:00:26:26
wlan0: CTRL-EVENT-DISCONNECTED bssid=68:72:51:00:26:26 reason=3 locally_generated=1
and my system config is reported as:
# lspci -vv | grep -i wireless
3e:00.0 Network controller: Intel Corporation Wireless 7260 (rev 6b)
	Subsystem: Intel Corporation Dual Band Wireless-AC 7260
# uname -a
Linux dgzb 3.16.0-38-generic #52~14.04.1-Ubuntu SMP Fri May 8 09:43:57 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
The following successfully connects to a WPA2-secured network:
$ sudo su
# iw dev
... Interface [interfacename] (typically wlan0, assumed below)
# iw wlan0 scan
... SSID: [ssid]
... RSN: (if present, the network is secured with WPA2)
# wpa_passphrase [ssid] >> /etc/wpa_supplicant.conf
...type in the passphrase for network [ssid] and hit enter...
# sh -c 'modprobe -r iwlwifi && modprobe iwlwifi 11n_disable=1'
# wpa_supplicant -i wlan0 -c /etc/wpa_supplicant.conf
(open a new terminal leaving the connection open, ending the command disconnects)
$ sudo su
# dhclient wlan0
(should be connected now)
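For reference, the wpa_passphrase step appends a stanza like this to /etc/wpa_supplicant.conf (the SSID and keys below are illustrative):

```
network={
	ssid="WA-bullet"
	#psk="the-plaintext-passphrase"
	psk=59e0d07fa4c7741797a4e394f38a5c321e3bed51d54ad5fcbd3f84bc7415d73d
}
```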
Mosh is a pretty good tool, almost indispensable when working in places with crappy internet. While it is designed to help with situations like “LTE on the beach,” it actually works very well in places where connectivity is genuinely bad: 1500 msec round-trip latency, 30% packet loss, and drops in connectivity that last seconds to hours, otherwise known as most of the world. On a good day I lose an SSH connection randomly every 3-6 hours, but I’ve only ever lost a Mosh session when my system went down.
It does a lot of things, but two are key for my use: it syncs user input in the background while locally echoing what you type, so you can finish your command (and correct a typo) without waiting 1500 msec for the remote echo to update; and it creates persistent connections that survive almost any kind of drop except killing the terminal application on one end or the other (anything in between can die, and when it recovers you catch up). This means compiles finish and you actually get the output warnings…
…some of them. Because Mosh’s one giant, glaring, painful, almost debilitating weakness is that it doesn’t support scrollback. So compared to tmux or something else that you can reconnect to after your SSH session drops, you really lose screen content, which is a PITA when ls-ing a directory. I mean, it isn’t that much of an efficiency gain to have to type “ls | less” instead of just “ls” every time you want to see a directory.
I found a solution that works for me. I also use tmux with Mosh, because tmux will survive a dead client, and Windows client reboots are a fact of life (I know, sad, but there are some tools I still need on Windows, hopefully not for much longer).
Tmux has a facility for creating a local log file, which I then tail -f in a separate SSH window. If the SSH client disconnects, no loss; I can pick up the log anytime. It just mirrors everything the Mosh terminal is doing, and the scroll-bar scrollback works fine. And since it is a raw text file, you can pipe the output through grep to limit what’s displayed to something of interest, and review the log asynchronously as, say, a build is progressing.
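For example, to follow just the interesting lines of a build from the separate SSH window (the log name depends on your tmux window name; “build” is assumed here):

```shell
# Follow the tmux pipe-pane log, showing only warnings and errors:
tail -f build-tmux.log | grep -Ei 'warning|error'
```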
Although there are some nice advantages to this, when/if Mosh supports scrollback, it’ll be far more convenient having it in the same window, but for now this is the easiest solution I could come up with.
# portmaster sysutils/tmux
# portmaster net/mosh
# ee ~/.tmux.conf
-> bind-key H pipe-pane -o "exec cat >>$HOME/'#W-tmux.log'" \; display-message 'Logging enabled to $HOME/#W-tmux.log'
-> set -g history-limit 30000
Start a Mosh session (for example with MobaXterm on Windows)
# tmux
# [CTRL]-b H
Start an SSH session (MobaXterm or PuTTY on Windows)
# tail -f csh-tmux.log
("csh" will be the name of the mosh window, so really "(MoshWindowName)-tmux.log")
You can tmux the ssh session too and still have scrollback and then just reconnect into the same tail command, which preserves the whole scrollback. If you’re on a connection like I’m on, your scrollback logfile will drop off a couple of times a day, but you won’t lose your Mosh session, and it’ll be waiting for you when you’re reminded that you need to see those security warnings from the compile that just scrolled off the Mosh screen forever.