Yes, how could you not be sure that when somebody offers to host your personal data for free on their servers, nothing could possib-lie go wrong. Uh, possibLY go wrong.
Sunday, March 17, 2013
Sunday, March 3, 2013
I’m not sure who decides which apps are blocked on a country-by-country basis, but an awful lot of apps are blocked in Iraq, and it seems like more all the time.
Blocking OTT apps like WhatsApp and Viber sort of makes sense. These apps are at war with the carriers, who claim the apps are somehow making money on the carriers’ backs*, and they seem to be largely blocked from install in Iraq. One would imagine that was Asiacell’s doing, but I changed SIMs and that didn’t help.
But then I noticed that weird apps like Angry Birds are not allowed in Iraq—apps that make no sense for a carrier to block. The advertising model actually works: ad-supported apps show (some) relevant, regional ads, as they should, in theory generating at least some revenue for the developers. Part of the problem may be that there’s no way to process in-app payments out of Iraq, so developers of even “freemium” apps may choose to block their apps in the country, reasoning that if they can’t make money, why let people use the app?
If so, it seems short-sighted: payment processing will ultimately be worked out, and even if it isn’t, Iraqis are allowed to travel to countries where in-app payments do work. Establishing a beachhead in the market, even without revenue, seems prudent. Blocking users who represent neither revenue nor cost seems arbitrarily punitive.
* The carriers’ business should be to transport bits agnostically. They have no business caring what we do with our bits; no bit costs more than any other bit to carry. If they can’t figure out how to make money carrying bits, they have no business being in the bit-carrying business. When they whine about a business like WhatsApp or Viber or Free Conference Call or Skype or Google hurting their profits, what they really mean is that these new businesses have obviated a parasitic business that was profitable only because of a de facto monopoly over what people could do with their bits.
Tuesday, January 8, 2013
There are two things I always do with a new digital device: get a good screen protector and a good case. (And the biggest memory card that will fit.)
The screen protector is pretty easy: I’ve used both Zagg and Armor Suit and prefer the Armor Suit, but not by much. Both work really well and I have an Armor Suit on my Motorola Razr V9x (still the best basic cell phone I’ve ever owned) that has lived in my pocket for many, many years without a scratch visible on the outer screen.
For cases I lived with an (almost iconic) yellow Defender case for my Blackberry Bold 9000 for about 5 years. It was awesome, indestructible, and fit the belt holder perfectly. Alas, it was no match for a random late night cab ride and early flight out of Dubai–can’t defend against that, can ya? Well, it lasted about 5 years, so no complaints. I contacted Otterbox to see if I could get a replacement silicone bit and they checked and only had 2 belt holsters left in stock from the entire product line. They mailed me those for free. Thanks Otterbox! (One did come in handy eventually.)
I got an iPod from United and, of course, got an Otterbox for it, one of the Commuter series. A polycarbonate outer shell protects the critical corners, backed underneath by a few millimeters of soft silicone, so the iPod is extremely well protected. This is a well-engineered protection model, far better than just a layer of silicone.
A corner drop tends to generate very high localized pressure where the corner tries to merge with the hard surface it is dropped on. The polycarbonate outer shell distributes that pressure load over the silicone underneath it, resulting in a broad, gentle distribution of the impact load and minimizing the risk of the localized overpressure that would crack plastic or glass.
Conversely, simple silicone sleeves without the polycarbonate layer, while adding critical padding and being fairly effective in most cases, can’t distribute the impact load nearly so effectively. This shouldn’t matter too much for a surface-to-surface drop, where the impact force is distributed over the whole back or even an edge of the phone, but in a corner drop the silicone can be effectively mushed out of the way as the hard surface attempts to touch delicate plastic or glass in a tragic romance.
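The arithmetic behind that claim is simple. A quick sketch, with purely illustrative numbers (the impact force and contact areas below are assumptions, not measurements of any actual case):

```python
def peak_pressure(force_n, contact_area_mm2):
    """Peak pressure (Pa) when an impact force is spread over a contact area."""
    return force_n / (contact_area_mm2 * 1e-6)  # convert mm^2 -> m^2

# Assumed numbers: same impact force, very different contact patches.
bare_corner = peak_pressure(500.0, 2.0)    # bare corner: tiny ~2 mm^2 patch
with_shell  = peak_pressure(500.0, 200.0)  # shell spreads load over ~200 mm^2

print(f"bare corner: {bare_corner:.2e} Pa, with shell: {with_shell:.2e} Pa")
# same impact, roughly two orders of magnitude less peak pressure
```

The exact figures don’t matter; the point is that pressure scales inversely with contact area, which is why the hard shell over soft silicone beats silicone alone on corner hits.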
I replaced the Blackberry with a Samsung Galaxy S3 and got a Commuter case for it. The case is very nice, not too big, but Otterbox did something very, very wrong: they rotated the polycarbonate tabs 45 degrees, covering the edges and not the corners. Why, Otter, why? The case is still quite nice, and it is the nicest looking and most comfortable I’ve found, but this is an odd engineering mistake. They talk about the “layers of protection” as a key selling point for their more expensive Commuter and Defender series, yet leave the most fragile corners protected by only a single layer. As protection goes, it is no better than the Impact, since the corners are all that really matter.
The polycarbonate shell does serve to anchor the access flaps closed, which is an improvement over the iPod case, but this could easily have been achieved with a few well-placed polycarbonate fingers reaching around the case without making it difficult to assemble (too many fingers wrapping around the device make it impossible to snap the device into the polycarbonate shell).
Further, the textured silicone edges on the iPod case are actually really nice to hold, far more comfortable and slip-resistant than the polycarbonate edges of the S3 case (and make the iPod less likely to drop than the S3 as well). As an additional bonus, the iPod version exposes some textured silicone on the back surface making the case somewhat non-slip, while the S3 case is all polycarbonate on the back. Without some non-slip silicone on the back, the likelihood that the enclosed device will slip off a sloped surface and onto a hard floor or into a toilet or sink is much greater. While the case makes a disaster far less likely for the former eventuality, it is not waterproof.
The Android OS just crushes iOS, and the availability of Android-specific tools and applications, particularly for security and encryption, makes it the best choice for a mobile device right now (though security, at least, is even less of a concern with a Blackberry; that and efficient use of data are the only things RIM still has going for it). Still, Otterbox really could have done a better job with the case. Hopefully the S4 case will get it right.
Saturday, January 5, 2013
I learned two things about Futurama recently that added to my already deep appreciation for the show. The first is that the theme song came from a very cool 1967 song by Pierre Henry called Psyche Rock, which is on YouTube. It was remixed by Fatboy Slim in an appealing way.
But what was most interesting was to see episode 10 of season 6, “The Prisoner of Benda,” a spoof of The Prisoner of Zenda, but including what may be the first TV-episode publication of the proof of a relatively complex mathematical theorem in group theory as a core plot element.
The problem in the plot is that the Professor’s mind-swapping machine creates an immune response that prevents swapping back in one step. So how do you get everyone back into their original bodies? Well, as Sweet Clyde says, it takes at most two extra players [who haven't swapped yet]. As the entire cast, including the robo-bucket, has swapped bodies, the situation is pretty complex, but fortunately one of the show’s writers, Ken Keeler, has a PhD in applied mathematics from Harvard and found a proof, which is actually shown in the show (above) and then worked into a fast montage that restores everyone.
In the following table, the heading shows the character name of the body, row 0 shows the occupant of that body by the end of the plot’s permutations and before the globetrotters start the transformations. Rows 1-7 show the steps to restore everyone to their original bodies. Each transformation was animated as a pair using the two “extra players” except the last rotation to restore Sweet Clyde and the Bucket.
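For the curious, the two-extra-players trick is easy to implement. Below is a sketch in Python of one valid swap sequence per cycle; it follows the spirit of Keeler’s result (undo each cycle with two fresh helpers, never repeating a body pair), though the exact order of swaps the Globetrotters animate on screen may differ. The function name and the helper defaults (the two fresh Globetrotters from the episode) are my own labels:

```python
def keeler_unscramble(perm, x="Sweet Clyde", y="Bubblegum"):
    """Restore every mind to its own body using two fresh helpers x and y.

    perm maps body -> mind currently occupying it (bodies and minds share
    labels); x and y must not appear in perm.  Returns the swap list and the
    final state.  No body pair is ever swapped twice (the machine's rule).
    """
    state = {**perm, x: x, y: y}
    swaps = []

    def swap(a, b):
        state[a], state[b] = state[b], state[a]
        swaps.append((a, b))

    # Decompose the scramble into cycles before touching anything.
    cycles, seen = [], set()
    for start in perm:
        if start in seen or perm[start] == start:
            seen.add(start)
            continue
        cyc, cur = [], start
        while cur not in seen:
            seen.add(cur)
            cyc.append(cur)
            cur = perm[cur]          # body cur holds the mind of body perm[cur]
        cycles.append(cyc)

    # Fix each cycle using only the two helpers (hence "at most two extra
    # players").  Each cycle incidentally leaves x and y swapped with each other.
    for c in cycles:
        swap(x, c[0])
        for body in c[1:]:
            swap(y, body)
        swap(x, c[1])
        swap(y, c[0])

    if state[x] != x:                # odd cycle count: one last, legal x-y swap
        swap(x, y)
    return swaps, state
```

For the 3-cycle {1: 2, 2: 3, 3: 1} this emits five swaps plus the final helper swap, and every pair is distinct, just as the theorem requires.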
Saturday, January 5, 2013
Upgrading cyrus-sasl-saslauthd-2.1.25 to the current cyrus-sasl-saslauthd-2.1.26, I started to get auth_krb5.c compile errors that terminated the build, like:
<command-line>: warning: this is the location of the previous definition
mv -f .deps/auth_getpwent.Tpo .deps/auth_getpwent.Po
cc -DHAVE_CONFIG_H -DSASLAUTHD_CONF_FILE_DEFAULT=\"/usr/local/etc/saslauthd.conf\" -I. -I. -I.. -I. -I./include -I./include -I./../include -I/usr/local/include -DKRB5_HEIMDAL -I/usr/local/include -O3 -pipe -march=native -DLDAP_DEPRECATED -fno-strict-aliasing -MT auth_krb5.o -MD -MP -MF .deps/auth_krb5.Tpo -c -o auth_krb5.o auth_krb5.c
In file included from mechanisms.h:35, from auth_krb5.c:51:
saslauthd.h:190:1: warning: "KRB5_HEIMDAL" redefined
<command-line>: warning: this is the location of the previous definition
auth_krb5.c: In function 'auth_krb5_init':
auth_krb5.c:105: warning: assignment discards qualifiers from pointer target type
auth_krb5.c:106: warning: assignment discards qualifiers from pointer target type
auth_krb5.c: In function 'auth_krb5':
auth_krb5.c:184: error: 'krb5_verify_opt' undeclared (first use in this function)
auth_krb5.c:184: error: (Each undeclared identifier is reported only once
auth_krb5.c:184: error: for each function it appears in.)
auth_krb5.c:184: error: expected ';' before 'opt'
auth_krb5.c:233: error: 'opt' undeclared (first use in this function)
*** Error code 1
Stop in /usr/ports/security/cyrus-sasl2-saslauthd/work/cyrus-sasl-2.1.26/saslauthd.
*** Error code 1
Stop in /usr/ports/security/cyrus-sasl2-saslauthd/work/cyrus-sasl-2.1.26/saslauthd.
*** Error code 1
Stop in /usr/ports/security/cyrus-sasl2-saslauthd.
I got some expert advice from the port maintainer, Hajimu UMEMOTO (what is not to love about BSD and open source? Something goes wrong, and the guy who knows everything about it tells you how to fix it right away). He correctly ascertained that I had security/krb5 installed, a dependency of openssh-portable. Kerberos, HEIMDAL, and GSSAPI occasionally have interactions, but his advice was to make with the directive KRB5_HOME=/usr/local. I put this into /etc/make.conf to make it permanent, deinstalled and reinstalled security/krb5, and then cyrus-sasl-2.1.26 compiled perfectly.
Thanks Mr Umemoto!
Friday, January 4, 2013
The certificate system is badly broken on a couple of levels, as shown by the most recent revelation: TurkTrust accidentally issued two intermediate SSL CA certificates, enabling the recipients to issue presumptively valid arbitrary certificates. This is just the latest compromise (probably not the last; this seems to happen a lot) in a disastrously flawed system, following the recent DigiNotar and Comodo attacks. There are 650 root CAs that can issue certs, including some CAs operated by governments with potentially conflicting political interests or poor human rights records, and your browser probably trusts most or all of them completely by default.
It is useful to think about what we use SSL certs for:
- Establishing an encrypted link between our network client and a remote server to foil eavesdropping and surveillance.
- Verifying that the remote server is who we believe it to be.
Encryption is by far the most important, so much more important than verification that verification is almost irrelevant, and fundamental flaws in the current CA system make even trying to enforce verification almost pointless. Most users have no idea what any of the cryptic (no pun intended) and increasingly annoying alerts warning of “unvalidated certs” mean, or even what SSL is.
Google recently started rejecting self-signed certs when attempting to establish an SSL encrypted POP connection via Gmail, an idiotically counterproductive move that will only make the internet less secure by forcing individual mail servers to connect unencrypted. And this is from the company whose cert management between their round-robin servers is a total nightmare: there’s no practical way to ever be sure whether a connection has been MITMed, as certs come randomly from any number of registrars and change constantly.
What I find most annoying is that the extraordinary protective value of SSL encrypted communication is systematically undermined by browsers like Firefox in an intrinsically useless effort to convince users to care about verification. I have never, not once, ever not clicked through SSL warnings. And even though I often access web sites from areas that are suspected of occasionally attempting to infiltrate dissident organizations with MITM attacks, I still have yet to see a legit MITM attack in the wild myself. But I do know for sure that without SSL encryption my passwords would be compromised. Encryption really matters and is really important to keeping communication secure; anything that adds friction to encryption should be rejected. Verification would be nice if it worked.
Self-signed certs and community verified certs (like CAcert.org) should be accepted without any warnings that might slow down a user at all so that all websites, even non-commercial or personal ones, have as little disincentive to adding encryption as possible. HTTPSEverywhere, damnit. Routers should be configured to block non-SSL traffic (and HTML email, but that’s another rant. Get off my lawn.)
Verification is unsolvable with SSL certs for a couple of reasons: some due to the current model, some due to reasonable human behavior, some due to relatively legitimate law-enforcement concerns, but mostly because absolute remote verification is probably an intractable problem.
Even at a well-run notary, human error is likely to occur. Because registrar certs are trusted globally by default, a simple typo can compromise anyone in the world. One simple mistake and everybody is at risk. Pinning does not actually reduce this risk, as breaks have so far come from generally well-regarded notaries, though rapid response to discovered breaches can limit the damage. Tools like Convergence, Perspectives, and CrossBear could mitigate the problem, but only if they are built in by default and have sufficiently few false positives that people pay attention to the warnings.
But even if issuance were somehow fixed with teams of on-the-ground inspectors and biometrics and colonoscopies, it wouldn’t necessarily help. Most people would happily click through to www.bankomerica.com without thinking twice. Indeed, since companies may have purchased almost every spelling variation and pointed them all toward their “most reasonable” domain name, it isn’t unreasonable to do so. If bankomerica.com asked for a cert in Ubeki-beki-beki-stan-stan, would they (or even should they) be denied? No: valid green bar, invalid site.

Even if misdirections were not SSL encrypted, it isn’t practical to typo-test every legit URL against every possible fake, and the vast majority of users would never notice if their usual bank site came up unencrypted one day after a DNS attack pointing to a site not even pretending to fake a cert (in fact, studies suggest that no users would notice). This user limitation fundamentally obviates the value of certs for identifying sites. But even typo-misdirection assumes too much of the user: all of my phishing spam uses brand names in anchor text leading to completely random URLs, rarely even reflective of the cover story, and the volume of such spam suggests this is a perfectly viable attack. Verification attacks don’t even need to go to a vaguely similar domain, let alone go to all the trouble of attacking SSL.
One would hope that dissidents or political activists in democracy-challenged environments that may be subject to MITM attacks might actually pay attention to cert errors or use Perspectives, Convergence, or CrossBear. User education should help, but in the end you can’t really solve the stupid-user problem with technology. If people will send bank details to Nigeria so that an astronaut abandoned by his nation can expatriate his back pay, there is no way to educate them on the difference between https://www.bankofamerica.com and http://www.bankomerica.com. The only useful path is to SSL encrypt all sites and try to verify them via a distributed trust mechanism as implemented by GPG (explicit chain of trust), Perspectives (wisdom of the masses), or Convergence (consensus of representatives). All of these seem infinitely more reliable than trusting any certificate registry, whether national or commercial, and as a bonus they escape the cert mafia by obviating the need for a central authority and the overhead it entails. But this only works if these tools have more valid positives than false positives, which is currently far from the case.
Further, law enforcement makes plausible arguments for requiring invisible access to communication. Ignoring the problematic but understandable preference for push-button access without review, and presuming that sufficient legal barriers are in place to ensure such capabilities protect the innocent and are only used for good, it is not rational to believe that law enforcement will give up on demanding lawful-intercept capabilities wherever possible. Such intercept is currently enabled by law-enforcement certificates, which permit authorized MITM attacks to capture encrypted data without tipping off the target of the investigation. Of course, if the US has the tool, every other country wants it too. Sooner or later, even with the best vetting, there is a regime change and control of such tools falls into nefarious hands (much like any data you entrust to a cloud service will sooner or later be sold off in an asset auction to whoever can scrape some residual value out of it under whatever terms suit them, but that too is a different rant). Thus it is not reasonable for activists in democracy-challenged environments to assume that SSL certs are a secure way to ensure their data is not being surveilled. Changing the model from intrinsic, automatic trust of authority to a web-of-trust model would substantially mitigate the risk of lawful-intercept certs falling into the wrong hands, though it would also make such certs useless, or far harder to implement.
There is no perfect answer to verification because remote authentication is Really Hard. You have to trust someone as a proxy, and the current model is to trust all or most of the random, faceless, profit- or nefarious-motive-driven certificate authorities. Where verification cannot be quickly made and is essential to security, the only effective mechanism is out-of-band verification, such as transmitting a hash or fingerprint of the target’s cryptographic certificate via voice or postal mail, or perhaps via public key cryptography.
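As a concrete sketch of the out-of-band approach, Python’s standard library is enough to fingerprint a server’s certificate so that both ends can compare the value over the phone or by mail. The function names are mine; the host you check is whatever server you care about:

```python
import hashlib
import ssl

def cert_fingerprint(der_bytes):
    """SHA-256 fingerprint of a DER-encoded certificate, colon-separated."""
    digest = hashlib.sha256(der_bytes).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

def fetch_fingerprint(host, port=443):
    """Fetch a server's certificate and fingerprint it, for comparison
    against a value obtained out of band (voice, postal mail, ...)."""
    pem = ssl.get_server_certificate((host, port))
    return cert_fingerprint(ssl.PEM_cert_to_DER_cert(pem))
```

If the fingerprint read to you out of band matches what fetch_fingerprint returns, you know who you’re talking to regardless of what any CA in the browser’s trust store says.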
Sadly, the effort to prop up SSL as a verification mechanism has been made at the compromise of widespread, low friction encryption. False security is being promoted at the expense of real security.
That’s just stupid.
Friday, January 4, 2013
Off-Site scripts are annoying.
To explain: I use NoScript (as everyone should) with Firefox. (It doesn’t work with Chrome; I might consider trusting Google’s browser for some mainstream websites when it does, but I don’t really like that Chrome logs every keystroke back to Google, and I’m not sure why anyone would tolerate that.) NoScript enables me to give per-site permission to execute scripts.
The best sites don’t need any scripts to give me the information I need. It is OK if the whizzy experience is degraded somewhat for security’s sake, as long as that is my choice. Offsite scripting can add useful functionality, but the visitor should be able to opt out.
Most sites use offsite scripting for privacy invasion. Generally they have made a deal with some heinous data aggregator whose business model is to compile dossiers of every petty interest and quirk you might personally have and sell them to whoever can make money off them: advertisers, insurance companies, potential employers, national governments, anyone who can pay. In return for letting them scrounge your data off the site, they give the site operator some slick graphs (and who doesn’t love slick graphs). But you lose. Or you block Google Analytics with NoScript. That was easy: block offsite scripts, or switch to private browsing (and Chrome’s private browsing mode is probably fine) and enjoy the fully scripted experience.
But I’ve noticed recently that a lot of sites are borrowing basic functionality from Google APIs. Simple things, like uploading images, for which there are plenty of open-source scripts. This basic functionality is being sold to site operators in an easy-to-integrate form in exchange for your personal information: in effect, you’re paying for their code with your privacy. And you either have to temporarily allow Google APIs to execute scripts in your browser and suck up your personal information, or you can’t use the site.
Thursday, January 3, 2013
With FreeBSD 9.1 out, it is time to get all my ports upgraded in advance of doing the OS update. The process is fairly painless, but occasionally, especially if you are slacking on updates, a change in configuration causes the usually completely automatic portupgrade -ra to fail.

One such update was upgrading openldap-sasl-server-2.4.31 to openldap-server-2.4.33_2, which failed with:

===> openldap-server-2.4.33_2 conflicts with installed package(s): openldap-sasl-client-2.4.33_1
     They install files into the same place.
     You may want to stop build with Ctrl + C.
===> License OPENLDAP accepted by the user
===> Found saved configuration for openldap-server-2.4.33
===> openldap-server-2.4.33_2 conflicts with installed package(s): openldap-sasl-client-2.4.33_1
     They will not build together. Please remove them first with pkg_delete(1).
*** Error code 1
Stop in /usr/ports/net/openldap24-server.
But because this is FreeBSD, and the open source community actually provides support (unlike, say, Microsoft, where such an error would languish for months, if not years, without a patch or any advice on how to fix it), the port maintainer, Xin Li, answered my question in less than 20 minutes with the following advice:
cd /usr/ports/net/openldap24-server
make config
Check “SASL” is checked?
Following his directions, everything compiled perfectly.
Friday, June 29, 2012
The latest (as of this writing) GCC port to FreeBSD 9.0 ended up creating some compile problems when I did a portupgrade -ra: /usr/ports/graphics/tiff couldn’t find some libraries:
g++46: error: /usr/local/lib/gcc46/gcc/x86_64-portbld-freebsd9.0/4.6.3/crtbeginS.o: No such file or directory
g++46: error: /usr/local/lib/gcc46/gcc/x86_64-portbld-freebsd9.0/4.6.3/crtendS.o: No such file or directory
*** Error code 1
The problem is that there is no 4.6.3 directory anymore once you install 4.6.4. I didn’t bother debugging the port problem, though I probably should have, and informing the port maintainer, and all of those good-citizenship steps; instead I took a shortcut that solved the problem:
cd /usr/local/lib/gcc46/gcc/x86_64-portbld-freebsd9.0/
ln -s 4.6.4 4.6.3
cd /usr/ports/graphics/tiff
make clean
portupgrade -ra
And all is good.
Wednesday, May 9, 2012
What happened to 1920×1200 laptop displays? Why are all new laptops regressing to 1920×1080? That’s the most asinine, disappointing regression since the end of commercial supersonic transport. It is so sad to be living in a world that is moving backwards at an ever accelerating pace.
My first transportable computer was a Mac Portable with a 640×480 screen, and I lived with that through a couple of generations. Eventually I got a Dell with 1440×900 pixels and could actually do some real work on it. About 10 years ago I got a Dell M70 with 1920×1200 pixels on a 15.4″ screen and found an acceptable resolution for portable work. Little did I know that the era from about 2000-2010 would be the apex of laptop technology. It is all downhill from here.
Once I looked forward to a bright future with 17″ displays sporting about the same generally usable pixel pitch (about 147 pixels per inch). If the world had continued to advance technically; if the now-retired SR-71 weren’t still the fastest, highest-flying plane ever built; if the now-retired Concorde weren’t the only commercial supersonic aircraft; if the retirement of the Space Shuttle didn’t herald the end of the US’s manned spaceflight capability; if we weren’t living on the burnt-out ruins of our former capabilities watching our technical competency spiral down the toilet, we’d have WQXGA (2560×1600) 17.4″ laptops right now. Maybe even QXGA 15.4″ options for those of us with good eyes.
But we don’t. We have the bizarre, stupid Vaio VGN-AW11M/H with a kid-friendly 104 PPI display sporting a useless 1680×945 pixels on an 18.4″ screen. That’s a pixel pitch straight out of 1990. Thanks for nothing.
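Pixel pitch is just geometry (pixels along the diagonal divided by the diagonal in inches), so the numbers in this rant are easy to check yourself:

```python
import math

def ppi(width_px, height_px, diagonal_in):
    """Pixels per inch from a panel's resolution and diagonal size."""
    return math.hypot(width_px, height_px) / diagonal_in

print(int(ppi(1920, 1200, 15.4)))   # 147: the old WUXGA 15.4" sweet spot
print(int(ppi(1680, 945, 18.4)))    # 104: the 18.4" Vaio's kid-friendly pitch
```

Plug in any laptop you’re considering; anything near 100 PPI is a regression to 1990.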
Nobody even makes a reasonably sized laptop with a 15.4″ screen with more than 1920×1080 pixels anymore (the only WUXGA laptop I can find at any size is the oversized, kids’-pitch 17″ MacBook Pro). I’m going to have to stick with my W500, or buy used ones for the rest of my life. Laptop makers: there’s no way I’m going to regress to a less productive, smaller pixel count. That’s just stupid. Pull your heads out and give us pixels. The only thing that really matters for productivity is pixels. More pixels = better. Fewer pixels = worse. Don’t bother releasing a new laptop if it is worse. If you’ve lost the competency, just pack it up.
Apple: the 264 PPI pitch of the 3rd-gen iPad is pretty good. If you build a 15.4″ MacBook Pro with QFHD (3840×2160) pixels at that pitch instead of the bizarrely large-type, kids’-book-useless 1440×900 resolution the current 15″ MacBook Pro is crippled by, I would actually buy one to run Ubuntu on. And maybe even have a bit of hope for the future.
(I’d suggest refraining from buying a laptop until 2013: Ivy Bridge will make 1920×1080 laptops as quaint as those 640×480 displays from 1990. The era from 2010-2013 may be remembered as the dark ages of laptops.)