Gecko in the bath

Wednesday, January 9, 2013 
Posted at 17:37:53 GMT-0700

Category: photo, Travel

Otterboxes for the iPhone and Galaxy S3

Tuesday, January 8, 2013 

There are two things I always do with a new digital device: get a good screen protector and a good case (and the biggest memory card that will fit).

The screen protector is pretty easy: I’ve used both Zagg and Armor Suit and prefer the Armor Suit, but not by much. Both work really well and I have an Armor Suit on my Motorola Razr V9x (still the best basic cell phone I’ve ever owned) that has lived in my pocket for many, many years without a scratch visible on the outer screen.

For cases, I lived with an (almost iconic) yellow Defender case for my BlackBerry Bold 9000 for about 5 years.  It was awesome, indestructible, and fit the belt holster perfectly.  Alas, it was no match for a random late-night cab ride and early flight out of Dubai–can’t defend against that, can ya? Still, five years of hard service is nothing to complain about. I contacted Otterbox to see if I could get a replacement silicone bit; they checked and had only 2 belt holsters left in stock from the entire product line.  They mailed me those for free. Thanks Otterbox! (One did come in handy eventually.)

I got an iPod from United and, of course, got an Otterbox for it, one of the Commuter series.  With a polycarbonate outer shell protecting the critical corners, backed underneath by a few millimeters of soft silicone, the iPod is extremely well protected.  This is a well-engineered protection model, far better than just a layer of silicone. (Update 2023: I still use this United 1M mile award iPod.)

A corner drop generates very high localized pressure where the corner tries to merge with the hard surface it lands on. A polycarbonate outer shell distributes that pressure over the silicone underneath it, resulting in a broad, gentle distribution of the impact load and minimizing the risk of the localized overpressure that cracks plastic or glass.
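
To put rough numbers on it, here is a back-of-the-envelope sketch in Python; every figure in it (mass, drop height, padding depth, contact areas) is invented for illustration, not measured:

import math

mass = 0.133                          # kg, roughly a Galaxy S3
v = math.sqrt(2 * 9.81 * 0.3)         # impact speed from a 0.3 m drop, ~2.4 m/s
stop = 0.001                          # m of padding over which the phone stops

force = mass * v**2 / (2 * stop)      # average impact force, ~390 N

corner_area = 1e-6                    # ~1 mm^2 of bare corner contact
shell_area = 2e-4                     # ~2 cm^2 once a hard shell spreads the load

print(f"corner: {force / corner_area / 1e6:5.0f} MPa")   # ~390 MPa
print(f"shell:  {force / shell_area / 1e6:5.0f} MPa")    # ~2 MPa

The impact energy is identical in both cases; the two-orders-of-magnitude drop in peak pressure comes entirely from spreading the contact area.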

Conversely, simple silicone sleeves without the polycarbonate layer, while adding critical padding and being fairly effective in most cases, can’t distribute the impact load nearly so effectively.  This should not matter too much for a surface-to-surface drop, where the impact force is distributed over the whole back or even an edge of the phone, but in a corner drop the silicone can be effectively mushed out of the way as the hard surface attempts to touch delicate plastic or glass in a tragic romance.

This outer shell is what distinguishes the Commuter series from Otter’s lower-cost silicone-only Impact series cases, as well as the host of cheap silicone sleeves on the market.

[Image: Otterbox Commuter cases, iPhone vs. Galaxy S3]

I replaced the BlackBerry with a Samsung Galaxy S3 and got a Commuter case for it.  The case is very nice, not too big, but Otterbox did something very, very wrong: they rotated the polycarbonate tabs 45 degrees, covering the edges and not the corners.  Why, Otter, why? It is still the nicest-looking and most comfortable case I’ve found, but this is an odd engineering mistake.  Otterbox talks about the “layers of protection” as a key selling point for their more expensive Commuter and Defender series, yet leaves the most fragile corners protected by only a single layer.  As protection goes, the case is no better than the Impact, since the corners are all that really matter.

The polycarbonate shell does serve to anchor the access flaps closed, which is an improvement over the iPod case, but this could easily have been achieved with a few well-placed polycarbonate fingers reaching around the case without making it difficult to assemble (too many fingers wrapping around the device make it impossible to snap the device into the polycarbonate shell).

Further, the textured silicone edges on the iPod case are actually really nice to hold, far more comfortable and slip-resistant than the polycarbonate edges of the S3 case (and they make the iPod less likely to be dropped in the first place).  As an additional bonus, the iPod version exposes some textured silicone on the back surface, making the case somewhat non-slip, while the S3 case is all polycarbonate on the back. Without some non-slip silicone on the back, the enclosed device is much more likely to slip off a sloped surface and onto a hard floor or into a toilet or sink. The case makes disaster far less likely on the hard floor; it is not waterproof.

While the Android OS just crushes iOS, and the availability of Android-specific tools and applications, particularly for security and encryption, makes it the best choice for a mobile device right now (though security, at least, is even less of a concern with a BlackBerry – that’s the one thing RIM still has going for it, that and efficient use of data), Otterbox really could have done a better job with the case.  Hopefully the S4 case will get it right.

Update

It has been almost 2 years and I’ve been carrying the Otterbox-protected S3 more or less continuously since then, in a relatively active and somewhat unforgiving environment, not that anyone’s pocket or purse would fail to meet that definition.  A few issues emerged:

  • The rubber flap covering the USB port, which you need to access at least twice a day for charging, tore off very early on;
  • I change SIMs a few times a month and the case doesn’t really like being taken on and off; it eventually cracked in two places, but it still holds together;
  • The unprotected silicone covering the corners began to deteriorate fairly quickly, as I predicted, and one corner has disintegrated completely, leaving that most fragile of impact points unprotected.

Failed corner of the Otterbox case

I’d probably buy another – two years is a pretty good life (but not as good as the 5 my BlackBerry gave me; I still miss that phone).  I wish Otterbox would focus on protecting the corners, not the edges. The iPod case, far less heavily used but equally traveled, shows no wear on the corners at all and provides the same protection it did two years ago.  It is a better design.

Update 2023, ten years later

The S3 is long gone and the case with it; it was already disintegrating, starting with the exposed corners I didn’t like when I got it back in 2012.  The iPod case?  Still on that iPod, which is still working.  More than 2 million miles, just on United iron, and still going strong.

Posted at 12:36:41 GMT-0700

Category: Cell phones, Neutral, photo, Reviews, Technology

Futurama is Awesome

Saturday, January 5, 2013 

I learned two things about Futurama recently which added to my already deep appreciation for the show. The first is that the theme song came from a very cool 1967 song by Pierre Henry called Psyché Rock, which is on YouTube. It was remixed by Fatboy Slim in an appealing way.

But what was most interesting recently was to see episode 10 of season 6, “The Prisoner of Benda,” a spoof of The Prisoner of Zenda, which includes what may be the first TV-episode publication of the proof of a relatively complex mathematical theorem in group theory as a core plot element.

[Image: the Prisoner of Benda theorem on the chalkboard]

The problem in the plot is that the Professor’s mind-swapping machine creates an immune response that prevents the same two bodies from swapping again, so you can’t just swap back in one step. So how do you get everyone back into their original bodies? Well, as Sweet Clyde says, it takes at most two extra players [who haven’t swapped yet]. As the entire cast, including the robo-bucket, has swapped bodies, the situation is pretty complex, but fortunately one of the show’s writers, Ken Keeler, has a PhD in applied mathematics from Harvard and found a proof, which is actually shown in the show (above) and then worked into a fast montage that restores everyone.

In the following table, the heading shows the character name of the body, and row 0 shows the occupant of that body at the end of the plot’s permutations, before the Globetrotters start the transformations. Rows 1-7 show the steps to restore everyone to their original bodies.  Each transformation was animated as a pair using the two “extra players,” except the last rotation to restore Sweet Clyde and the Bucket.
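
The theorem is constructive, so it is easy to turn into code. Below is a minimal Python sketch of one standard two-helper unwinding, assuming the machine’s only rule is that the same pair of bodies can never swap twice; the helper names are the Globetrotters’, but the swap order and the three-person example are my own illustration, not the exact sequence animated in the episode:

def restore(perm, x="Bubblegum Tate", y="Sweet Clyde"):
    """perm maps body -> the mind currently in it (a permutation of its keys).
    Returns a list of body-pair swaps, no pair used twice, that fixes everyone
    using two helpers who have never been through the machine."""
    state = dict(perm)
    state[x], state[y] = x, y        # helpers start in their own bodies
    swaps = []

    def swap(a, b):                  # the machine exchanges minds in bodies a, b
        state[a], state[b] = state[b], state[a]
        swaps.append((a, b))

    # decompose the original mess into cycles: perm[c[i]] == c[i+1]
    seen, cycles = set(), []
    for start in perm:
        cycle, cur = [], start
        while cur not in seen:
            seen.add(cur)
            cycle.append(cur)
            cur = perm[cur]
        if len(cycle) > 1:
            cycles.append(cycle)

    for c in cycles:                 # unwind each cycle with len(c) + 2 swaps
        swap(x, c[0])                # x picks up the mind that belongs in c[1]
        swap(y, c[-1])               # y picks up the mind that belongs in c[0]
        for body in c[1:]:           # x drops each mind into its home body
            swap(x, body)
        swap(y, c[0])                # y returns c[0]'s mind; x and y end crossed
    if state[x] != x:                # uncross the helpers: legal, because they
        swap(x, y)                   # have never swapped with each other before

    assert all(state[b] == b for b in state)
    return swaps

# body A holds B's mind, B holds C's, C holds A's (a 3-cycle):
print(restore({"A": "B", "B": "C", "C": "A"}))

Running it on the three-cycle prints six swaps: five to unwind the cycle and one final swap between the two helpers to uncross them.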

Posted at 16:34:18 GMT-0700

Category: Films, Positive, Reviews, Technology

cyrus-sasl-saslauthd-2.1.26 auth_krb5.c compile error

Saturday, January 5, 2013 

Upgrading from cyrus-sasl-saslauthd-2.1.25 to the current cyrus-sasl-saslauthd-2.1.26, I started to get auth_krb5.c compile errors that terminated the build like this:

<command-line>: warning: this is the location of the previous definition
mv -f .deps/auth_getpwent.Tpo .deps/auth_getpwent.Po
cc -DHAVE_CONFIG_H
-DSASLAUTHD_CONF_FILE_DEFAULT=\"/usr/local/etc/saslauthd.conf\" -I. -I.
-I.. -I. -I./include -I./include -I./../include   -I/usr/local/include
-DKRB5_HEIMDAL -I/usr/local/include  -O3 -pipe -march=native
-DLDAP_DEPRECATED -fno-strict-aliasing -MT auth_krb5.o -MD -MP -MF
.deps/auth_krb5.Tpo -c -o auth_krb5.o auth_krb5.c
In file included from mechanisms.h:35,
                 from auth_krb5.c:51:
saslauthd.h:190:1: warning: "KRB5_HEIMDAL" redefined
<command-line>: warning: this is the location of the previous definition
auth_krb5.c: In function 'auth_krb5_init':
auth_krb5.c:105: warning: assignment discards qualifiers from pointer
target type
auth_krb5.c:106: warning: assignment discards qualifiers from pointer
target type
auth_krb5.c: In function 'auth_krb5':
auth_krb5.c:184: error: 'krb5_verify_opt' undeclared (first use in this
function)
auth_krb5.c:184: error: (Each undeclared identifier is reported only once
auth_krb5.c:184: error: for each function it appears in.)
auth_krb5.c:184: error: expected ';' before 'opt'
auth_krb5.c:233: error: 'opt' undeclared (first use in this function)
*** Error code 1

Stop in
/usr/ports/security/cyrus-sasl2-saslauthd/work/cyrus-sasl-2.1.26/saslauthd.
*** Error code 1

Stop in
/usr/ports/security/cyrus-sasl2-saslauthd/work/cyrus-sasl-2.1.26/saslauthd.
*** Error code 1

Stop in /usr/ports/security/cyrus-sasl2-saslauthd.

The port maintainer, Hajimu UMEMOTO, came through with some expert advice (what is not to love about BSD and open source?  Something goes wrong and the guy who knows everything about it tells you how to fix it right away).  He correctly ascertained that I had security/krb5 installed, a dependency of openssh-portable.  Kerberos, Heimdal, and GSSAPI occasionally have interactions, but his advice was to make with the directive KRB5_HOME=/usr/local. I put this into /etc/make.conf to make it permanent, deinstalled and reinstalled security/krb5, and then cyrus-sasl-2.1.26 compiled perfectly.
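
For reference, the whole recipe looked something like this (the make.conf line is the permanent form of his directive; paths are the port defaults):

echo 'KRB5_HOME=/usr/local' >> /etc/make.conf
cd /usr/ports/security/krb5 && make deinstall reinstall clean
cd /usr/ports/security/cyrus-sasl2-saslauthd && make clean install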

Thanks, Mr. Umemoto!

Posted at 13:41:23 GMT-0700

Category: FreeBSD, Technology

Overthrow the Cert Mafia!

Friday, January 4, 2013 

The certificate system is badly broken on a couple of levels, as the most recent revelation illustrates: Turktrust accidentally issued two intermediate SSL CAs, which enabled the recipients to issue presumptively valid arbitrary certificates. This is just the latest (probably not the latest for long; this seems to happen a lot) compromise in a disastrously flawed system, following the Diginotar and Comodo attacks. There are about 650 root CAs that can issue certs, including some CAs operated by governments with potentially conflicting political interests or poor human rights records, and your browser probably trusts most or all of them completely by default.

It is useful to think about what we use SSL certs for:

  • Establishing an encrypted link between our network client and a remote server to foil eavesdropping and surveillance.
  • Verifying that the remote server is who we believe it to be.

Encryption is by far the more important of the two, so much more important that verification is almost irrelevant, and the fundamental flaws of verification in the current CA system make even trying to enforce it almost pointless. Most users have no idea what any of the cryptic (no pun intended) and increasingly annoying alerts warning of “unvalidated certs” mean, or even what SSL is.

Google recently started rejecting self-signed certs when establishing an SSL-encrypted POP connection via Gmail, an idiotically counterproductive move that will only make the internet less secure by forcing individual mail servers to connect unencrypted. And this from the company whose cert management between their round-robin servers is a total nightmare: certs come randomly from any number of registrars and change constantly, so there’s no practical way to ever be sure whether a connection has been MITMed or not.
[Image: Perspectives flagging Google’s certs]
What I find most annoying is that the extraordinary protective value of SSL-encrypted communication is systematically undermined by browsers like Firefox in an intrinsically useless effort to convince users to care about verification. I have never, not once, ever not clicked through SSL warnings. And even though I often access web sites from areas that are suspected of occasionally attempting to infiltrate dissident organizations with MITM attacks, I have yet to see a legit MITM attack in the wild myself. But I do know for sure that without SSL encryption my passwords would be compromised. Encryption really matters and is really important to keeping communication secure; anything that adds friction to encryption should be rejected. Verification would be nice if it worked, but not at the cost of friction on encryption.

no secure encryption unless you pay the cert mafia

Self-signed certs and community verified certs (like CAcert.org) should be accepted without any warnings that might slow down a user at all so that all websites, even non-commercial or personal ones, have as little disincentive to adding encryption as possible. HTTPSEverywhere, damnit. Routers should be configured to block non-SSL traffic (and HTML email, but that’s another rant. Get off my lawn.)
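
Self-signing is already nearly frictionless on the server side; something like this single OpenSSL command (hostname and filenames illustrative) mints a cert that would give any personal site strong encryption, if browsers stopped punishing it:

openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/CN=www.example.com" -keyout example.key -out example.crt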

Verification is unsolvable with SSL certs for a couple of reasons: some due to the current model, some due to reasonable human behavior, some due to relatively legitimate law-enforcement concerns, but mostly because absolute remote verification is probably an intractable problem.

[Image: Akamai cert error, har har]

Even at a well-run notary, human error is likely to occur. Because registrar certs are by default trusted globally, a simple typo can compromise anyone in the world. One simple mistake and everybody is at risk. Pinning does not actually reduce this risk, as the breaches so far have been at generally well-regarded notaries, though rapid response to discovered breaches can limit the damage. Tools like Convergence, Perspectives, and CrossBear could mitigate the problem, but only if they are built in by default and have sufficiently few false positives that people pay attention to the warnings.

But even if issuance were somehow fixed with teams of on-the-ground inspectors and biometrics and colonoscopies, it wouldn’t necessarily help. Most people would happily click through to www.bankomerica.com without thinking twice. Indeed, as companies may have purchased almost every spelling variation and pointed them all toward their “most reasonable” domain name, it isn’t unreasonable to do so. If bankomerica.com asked for a cert in Ubeki-beki-beki-stan-stan, would they (or even should they) be denied? No – valid green bar, invalid site. Even if the misdirected sites weren’t SSL encrypted at all, it isn’t practical to typo-test every legit URL against every possible fake, and the vast majority of users would never notice if their usual bank site came up unencrypted one day after a DNS attack pointing to a site not even pretending to fake a cert (in fact, studies suggest that no users would notice). This user limitation fundamentally obviates the value of certs for identifying sites. But even a typo-misdirection is assuming too much of the user: all of my phishing spam uses brand names in anchor text leading to completely random URLs, rarely even reflective of the cover story, and the volume of such spam suggests this is a perfectly viable attack. Verification attacks don’t even need to go to a vaguely similar domain, let alone go to all the trouble of attacking SSL.

[Image: Google cert warning]

One would hope that dissidents or political activists in democracy-challenged environments that may be subject to MITM attacks might actually pay attention to cert errors or use Perspectives, Convergence, or CrossBear. User education should help, but in the end you can’t really solve the stupid-user problem with technology. If people will send bank details to Nigeria so that a nationality-abandoned astronaut can expatriate his back pay, there is no way to educate them on the difference between https://www.bankofamerica.com and http://www.bankomerica.com. The only useful path is to SSL encrypt all sites and verify them via a distributed trust mechanism as implemented by GPG (explicit chain of trust), Perspectives (wisdom of the masses), or Convergence (consensus of representatives). All of these seem infinitely more reliable than trusting any certificate registry, whether national or commercial, and as a bonus they escape the cert mafia by obviating the need for a central authority and the overhead it entails. But this only works if these tools have more valid positives than false positives, which is currently far from the case.
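
GPG’s chain of trust, for instance, is built by individuals checking fingerprints with each other and then signing the keys they have verified; the commands are standard GPG, with the address obviously illustrative:

gpg --fingerprint alice@example.org    # read this aloud to Alice and compare
gpg --sign-key alice@example.org       # vouch for the key once it matches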

[Image: CrossBear checking Google’s cert]

Further, law enforcement makes plausible arguments for requiring invisible access to communication. Ignoring the problematic but understandable preference for push-button access without review, and presuming that sufficient legal barriers are in place to ensure such capabilities protect the innocent and are only used for good, it is not rational to believe that law enforcement will give up on demanding lawful-intercept capabilities wherever possible. Such intercept is currently enabled by law-enforcement certificates, which permit authorized MITM attacks to capture encrypted data without tipping off the target of the investigation. Of course, if the US has the tool, every other country wants it too. Sooner or later, even with the best vetting, there is a regime change and control of such tools falls into nefarious hands (much like any data you entrust to a cloud service will sooner or later be sold off in an asset auction to whoever can scrape some residual value out of it under whatever terms suit them, but that too is a different rant). Thus it is not reasonable for activists in democracy-challenged environments to assume that SSL certs are a secure way to ensure their data is not being surveilled. Changing the model from intrinsic, automatic trust of authority to a web-of-trust model would substantially mitigate the risk of lawful-intercept certs falling into the wrong hands, though it would also make such certs useless or far harder to implement.

There is no perfect answer to verification because remote authentication is Really Hard. You have to trust someone as a proxy, and the current model is to trust all or most of the random, faceless, profit- or nefarious-motive-driven certificate authorities. Where verification cannot be quickly made and is essential to security, out-of-band verification is the only effective mechanism: transmitting a hash or fingerprint of the target’s cryptographic certificate via voice or postal mail, or perhaps via public key cryptography.
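
For example, a server operator can read a fingerprint of the real cert over the phone, and a correspondent can compute the fingerprint of whatever cert their connection actually presented; standard OpenSSL does both (hostname and filename illustrative):

# on the server, from the certificate file:
openssl x509 -in example.crt -noout -fingerprint -sha256
# on the client, from the live connection:
openssl s_client -connect www.example.com:443 < /dev/null 2>/dev/null |
    openssl x509 -noout -fingerprint -sha256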

Sadly, the effort to prop up SSL as a verification mechanism has come at the cost of widespread, low-friction encryption. False security is being promoted at the expense of real security.

That’s just stupid.

Posted at 15:18:25 GMT-0700

Category: Privacy, Security, Technology

Google APIs Suck

Friday, January 4, 2013 

Off-site scripts are annoying and privacy invasive. They are a vector for malware, waste your computer’s resources, and generally add limited capability.  They’re a shortcut for developers, but they rarely add real value that can’t be provided by locally hosted, open-source scripts, and they always compromise your privacy (or the privacy of your site’s visitors).

To explain: I use NoScript (as everyone should) with Firefox (it doesn’t work with Chrome; I might consider trusting Google’s browser for some mainstream websites when it does, but I don’t really like that Chrome logs every keystroke back to Google, and I’m not sure why anyone would tolerate that).  NoScript enables me to give per-site permission to execute scripts.

The best sites don’t need any scripts to give me the information I need.  It is OK if the whizzy experience is degraded somewhat for security’s sake, as long as that is my choice. Offsite scripting can add useful functionality, but the visitor should be able to opt out.

Most sites use offsite scripting for privacy invasion. Generally they have made a deal with some heinous data aggregator whose business model is to compile dossiers of every petty interest and quirk you might personally have and sell them to whoever can make money off them: advertisers, insurance companies, potential employers, national governments, anyone who can pay.  In return for letting them scrounge your data off the site, they give the site operator some slick graphs (and who doesn’t love slick graphs). But you lose.  Or you block Google Analytics with NoScript.  That part was easy: block offsite scripts, or switch to private browsing (Chrome’s private browsing mode is probably fine) and enjoy the fully scripted experience.

But I’ve noticed recently that a lot of sites are borrowing basic functionality from Google APIs.  Simple things like uploading images, for which there are plenty of open source scripts, are being sold to site operators in an easy-to-integrate form in exchange for your personal information: in effect, they’re paying for their code with your privacy. And you either temporarily allow Google APIs to execute scripts in your browser and suck up your personal information, or you can’t use the site.

If you manage a website, remove as many offsite calls as you can, including calls back to WordPress and font servers.  These are all data collection mechanisms that offer convenience in exchange for aggregating data on your users.  I recommend three browser plugins to significantly improve privacy and reduce data collection.  They break some sites, but those sites are so privacy violating that you shouldn’t be visiting them anyway.

LocalCDN

LocalCDN redirects CDN calls to locally cached copies, which improves performance and protects privacy.  CDNs make good money off your private data without your consent, and the features they provide are easily replaced with local delivery.  This seems to have zero impact on the browsing experience.

For Firefox, you might try Decentraleyes.

Privacy Badger

EFF’s Privacy Badger is great.  It can be your only ad blocker if you, say, support ad-monetized content but just don’t want to be tracked.  EFF’s goal isn’t so much to end advertising as to give the user a tool to reject the more privacy-invasive elements of advertising and other tracking mechanisms.  The “learning” mode is disabled by default because using it is, itself, trackable.

uBlock Origin

The ur-privacy plugin, uBlock Origin is fairly aggressive in its default blocking, so it not only protects privacy but also blocks scripts that slow your computer down and waste your costly energy doing free work for advertisers, speeding up browsing.  It does, however, break some pages, including things like logins and redirects, so become familiar with the mechanisms for selectively disabling blocking on scripts or sites that are important to you.

Posted at 07:34:36 GMT-0700

Category: Politics, Privacy, Security, Technology

openldap-server-2.4.33_2

Thursday, January 3, 2013 

With FreeBSD 9.1 out, it is time to get all my ports upgraded in advance of doing the OS update.  The process is fairly painless, but occasionally, especially if you have been slacking on updates, a change in configuration causes the usually completely automatic “portupgrade -ra” to fail.

One such update was upgrading ‘openldap-sasl-server-2.4.31’ to ‘openldap-server-2.4.33_2’, which failed with:

===>  openldap-server-2.4.33_2 conflicts with installed package(s):
      openldap-sasl-client-2.4.33_1

      They install files into the same place.
      You may want to stop build with Ctrl + C.
===>  License OPENLDAP accepted by the user
===>  Found saved configuration for openldap-server-2.4.33

===>  openldap-server-2.4.33_2 conflicts with installed package(s):
      openldap-sasl-client-2.4.33_1

      They will not build together.
      Please remove them first with pkg_delete(1).
*** Error code 1

Stop in /usr/ports/net/openldap24-server.

But because this is FreeBSD and the open source community actually provides support (unlike, say, Microsoft, where such an error would languish for months, if not years, without a patch or any advice on how to fix it), the port maintainer, Xin Li, answered my question in less than 20 minutes with the following advice:

cd /usr/ports/net/openldap24-server
make config

Then verify that “SASL” is checked.

Following his directions, everything compiled perfectly.
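
With SASL enabled in the saved options, the conflict goes away; from there, re-running the blanket upgrade from the top of the post should pick up where it left off:

portupgrade -ra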

Posted at 15:49:42 GMT-0700

Category: FreeBSD, HowTo