How Android’s “won’t fix” problem is the result of poor standardization

During the past year, the WebView vulnerabilities in Android have been making the rounds on various technology-focused websites. More recently, another WebView vulnerability was discovered, affecting versions 4.3 and below of the popular mobile OS (or roughly 60% of its users). Three days ago, HotHardware released a piece on why Google will not patch this vulnerability on 4.3, let alone older versions.

As a quick reminder, Android 4.3, the last version of the Jelly Bean series of releases, was launched on July 24th, 2013, and its last point release (4.3.1) in October of that year. That was 15 months ago. A device that shipped with this Android version was the second-generation Nexus 7, which is still under warranty in places where a two-year warranty is mandatory, like the EU. The Nexus 7, being a flagship Android device from Google, received updates to more recent Android versions; the same can’t be said about most other devices released with 4.3 or earlier.

[Chart: market share of the different Android versions]

Those 60% sure would like to be in the 39%.

Most of the discussion so far has been centered on whether the responsibility to patch older Android versions and/or push new ones to phones lies with Google or with the manufacturers, or whether the problem really is with the carriers, which won’t update their customized builds of the OS. There’s also the line of discussion that says such responsibility does not exist, because the problem is fixed in the latest Android version, and anyway, for God’s sake, are you still using a phone that came out six months ago? So vintage. Oh wait, how are you not using a high-end phone from <insert major brand>? (And even high-end phones sometimes don’t get updates past the next major release.)

I would like to shine a light on another side of the problem: the fact that smartphones, tablets and similar devices can’t have their software updated by the user. In fact, it’s not just the user who can’t update them or choose to run a different operating system: I’m convinced that, for the most part, if the manufacturers wanted to update their Android systems to a more recent OS version, or switch to, say, Windows Phone or Firefox OS, they would have much trouble doing so themselves. I pin this down to two different but related issues: the lack of a proper driver system on Android (possibly involving Linux), and the multitude of ways these devices boot their OS, expect updates and do basic hardware communication. Both issues are related to a bigger problem: the lack of standards in the world of embedded consumer electronics.

In this text I’m leaving aside all the arguments regarding “open source vs. closed source”, “walled garden vs. open garden”, “but but binary blobs!”, etc. Both theory and practice show that these debates and inconveniences either don’t matter or can be worked around, with workarounds that are successfully used in practice. The only “inconvenience” that might remain is the hardware manufacturers’ wish for people to replace their “old” devices every six months or so. This turns out to be a game of extortion played on those who worry about their security: “if you want an OS patched against this horrible vulnerability, just buy a new device that won’t do much more than your current one, but will have that single line of code changed”.

In a perfect world, though, manufacturers that wanted to play that game would have to do it in the clear, by explicitly locking their devices (as most already do) and announcing on the box that there will be no updates, fixes or warranties software-wise. (Curiously, the texts that say such things are usually found in free-as-in-beer software licenses, not in the software you pay for in the form of hardware.) But let’s leave the utopia aside and focus on the two standards-related issues I mentioned before.

I said Android doesn’t have a proper driver system. This statement can be taken as incorrect, because, after all, Linux is the part of the stack responsible for driving the hardware. But while Linux is not Android, Android definitely includes Linux, and its creators and maintainers made a deliberate choice to use this kernel. I’m not saying it’s a bad choice, quite the contrary – only Linux and a few other Unix-like kernels could scale down and adapt to the hardware and ARM architecture used in most handheld consumer devices.

Using Linux is taking a giant shortcut (again, that isn’t bad – reusing is good). Microsoft, for things like the (abandoned) Windows RT and Windows Phone, besides porting some of the upper layers of the Windows stack and developing new ones, also had to do additional work to get the NT kernel to run on such hardware. It’s worth mentioning that despite that effort, Windows Phone 8+ has hardware requirements higher than those of Android (comparing versions released in the same time span, please correct me if I’m wrong).

Going back to the drivers, many people say the big roadblock to making new Android releases run on (relatively) old hardware is the binary blobs, the closed-source drivers that control much of the hardware in those embedded systems. Now, a bit of anecdotal evidence: I use proprietary drivers from at least Nvidia and Broadcom on the Linux install on my laptop, and these have survived upgrades from Linux Mint 15 to 17, and multiple Linux kernel updates from at least 3.8.8 to 3.14.27, just fine. This is because the proprietary part is well separated from the things that can possibly change between kernel versions, and there are clear update paths defined.

Of course it helps if the maker of the proprietary drivers is interested in having their drivers run on newer operating system versions, but if all drivers were properly developed and not added to the system as ugly kernel patches (or should I say, “hacks”?) for which nobody has the source, as I’ve seen System-on-Chip manufacturers do (looking at you, Mediatek, Realtek, …), the problems would be mostly gone. The practice of doing such ugly source editing is one of the reasons I say that even if manufacturers wanted to, they couldn’t switch to another OS or update to more recent Android versions. I suspect that at some companies, just a few months after devices ship, even high-end ones, entire source trees, complete git repos, are rm -r-ed out of every system. The GNU GPL doesn’t stop applying just because you got rid of the source, does it? As if said people ever read that license…

There is another “entertainment” awaiting those who take the updating matter into their own hands and attempt to port the OS of their liking to their device: understanding how the device expects to be updated and how it starts its OS. While this is sometimes just a case of watching updater software do its job (that is, when an update is even available), often additional steps are needed, and this is where one finds out that most devices use U-Boot – but often it’s even more patched than the Linux kernel, and again, the source code is nowhere to be seen. There is then a myriad of ways to boot the kernel and, from there, to start userspace; fortunately this is more or less constant between Android devices. Still, undocumented quirks are everywhere, and one basically has to work with each device on an individual basis. The same model has various versions? Great, expect to repeat that work for each version.

[Image: G1, Nexus One, Nexus S, Galaxy Nexus]

These all have a color screen, a speaker, a microphone, some buttons, and can make calls. It’s 2015, standards exist, they must be really similar, right? Yes, as long as you don’t attempt to change their OS…

And finally, we get to what I personally think is the core of the issue: each device is too much of an individual case, and work must be done for each device. It’s been like this since, well, ever – for well over a decade, since what can be called the first smartphone was launched (the HTC Wallaby). In the beginning, I think this was justified – the hardware was not powerful enough to handle complex software abstractions and advanced boot methods, nor did software advance at today’s pace. Consumer handhelds were also not as ubiquitous as they are today. We can compare this to the evolution of the Personal Computer, where in the end everyone settled around the IBM PC standard. A corresponding standard for the smartphones and tablets everyone has is yet to be found – such a standard is what enables one to buy almost any computer off the shelf and install a different OS on it, or a different version of the same OS. It would also allow for buying devices without a preloaded OS. This means the user would be able to control their own user experience and security. I would no longer have to buy a new phone to stay safe, just because (and this would happen inevitably – no software is bug-free) a vulnerability was found in Android 4.2.

Sure, despite the PC standards, there are computers on the market that come as locked down as today’s tablets and smartphones. And there is no problem with that, as long as such locked-down things are not the only option. When locked-down is the only option, or unlocked options are prohibitively expensive, there is little room for innovation, consumers end up not having much to choose from, and eventually there is no way to have durable hardware, if all the available alternatives follow the “update the hardware to update the software” scheme.

Even in today’s context, there are better ways to ensure operating systems stay up-to-date in terms of security, without requiring a change to another version. Google should look a bit more at Microsoft, which has got one thing right on Windows for over ten years: Windows Update. Microsoft ensures support for a specified number of years for its OS, independently of the hardware it runs on; this is something consumers like and enterprises love. Google seems to have learned, so much so that it is moving a lot of things that were previously built into Android to Google Play Services, a component that can be updated through the Play Store like other apps. Unfortunately, this means making more and more of the OS closed-source, but that’s another subject. Personally, I would rather pay, say, 10 to 20% of the original price of my phone with each update than have to buy a new phone when I definitely don’t need one, except that the bits executing in its CPU are all of a sudden “old” and insecure.

I believe an update scheme à la Microsoft would be profitable for Google and let them gain a bigger market share in the enterprise. (Actually, if Google is taking any of that market share, it is because of the “cloud!” factor and because enterprises are moving to Google’s systems as “it’s what everyone uses”, not because it fits their needs better.) It could be perceived as terrible for hardware manufacturers, because there would be one less reason to buy new devices, and let’s not forget Google also sells hardware. Apple sells hardware too, and people happily run Windows, Linux or whatever on their Macs and MacBooks, and I doubt Apple has lost any business because of that: quite the contrary. It shows the two things don’t need to be mutually exclusive. Apple still manages to sell a lot of Macs, and people who want to stay with an older machine still enjoy updates for much longer. In their line of consumer handhelds, while it is perceived as being even more locked down than the competition, each model tends to get at least two major OS updates (for free!), making people who aren’t in an “upgrade cycle” happier.

I am actually surprised and annoyed that consumer rights associations don’t complain more about the situation. It seems that certain companies were successful in planting into people’s minds the idea that in the case of phones, tablets, smart watches, etc., the software can’t be decoupled from the hardware. In fact, in its current state, it’s really hard to decouple it, but that’s because it is what manufacturers want, not because of technical obstacles. Perhaps this thinking comes from the fact that, after all, the introduction of smartphones and tablets to the general public was done by Apple, which presented its vertically-integrated walled garden first and foremost, giving everyone else the idea that this was the only way these devices would ever be successful.

To finish, another anecdote. I have bought a cheap unknown-brand tablet with an x86-64 Intel CPU. It runs full Windows 8.1 and is fully up-to-date thanks to Windows Update; I’m very happy with it. When Windows 10 comes out I plan to install it; either the upgrade is as easy as from 8 to 8.1, or I’ll install it manually by connecting a USB stick and using the UEFI. As we know, Windows is closed-source, and drivers are nothing more than closed-source “binary blobs”. Still, I know I’ll be able to install most if not all of these drivers on Windows 10, to the point where I can use that version of Windows on the hardware I have now. Perhaps I’ll need to throw some money at Microsoft to get Windows 10, if that idea of giving it for free to users of 8.1 and 7 turns out not to apply to me. Had I bought an Android tablet, I could throw money at Google and at the manufacturer, and I’m sure that after a year or so, neither would put a single update out for the hardware. The money would have bought a new piece of hardware, yes… but of what use is another piece of plastic and silicon, when the previous one works perfectly? They sure like to contribute to e-waste.

Related question: are there any phones running full x86 Windows? Perhaps once Windows 10 comes out?

Distributed systems and mersit, a Tiny Server Redundancy Manager

My previous post on this blog was published at the end of the long-gone month of June. Many things have changed since then; for example, I entered university and was pressed into creating a Facebook account (more or less separate from the rest of my online presence, so don’t look for me, I won’t add you). In that post, I rambled about the recovery from a big server outage that cost 42 hours of tny.im downtime, and over one week of server downtime. I learned my lessons (I doubt BlueVM learned theirs, but that’s a whole other story), and I went forward with what I said I would do: “setting up a new advanced and redundant system” for ensuring tny.im is always up.

That system has been up and running for over two months now, with varying numbers of servers providing the redundancy and load balancing, and a plethora of occasional hiccups. Right now it’s composed of three virtual servers (all from different providers…), but there were times when it was composed of five. These three servers are paid for, and while they aren’t exactly expensive (but not the cheapest, either), you can imagine the bill, so let’s not talk about tny.im profitability now, OK? (I have kind of given up.)


In the spirit of the great statisticians of our time, here’s a graph without title, labels or axes.

However, having three servers serving the same website in a DNS round-robin setup, with all three of them being almost clones of each other (meaning they all have the same files and database contents, kept in sync), doesn’t directly lead to greater uptime. In fact, I have found out it can lead to more outages, since now the total downtime is approximately the sum of the downtime of each server. Of course, most of these outages are partial (as in, only users unlucky enough to have their DNS request resolved to the IP of a server that is down will actually perceive the site as down), except for when the MariaDB replication freaks out and basically grinds all database operations, on all servers, to a halt, requiring a complicated manual restart of all MariaDB instances, in a specific order (yes, I have spent many hours searching for an alternative database system, and couldn’t find any that met my requirements).

In order to actually achieve greater uptime, one must have a system that automatically manages the DNS records so that the domain(s) of the website in question never have any records pointing to servers that are down. In other words, the “sheep” must be “hidden from sight” as soon as they go “bad”, and should be put back “on stage” once they become “good” again. DNS being something that was definitely not made for real-time record edits, with many systems caching DNS request results well beyond the specified TTL, this system obviously can’t ensure that the “bad sheep” are hidden from everyone watching the show. But if it manages to do it for even a small percentage of the public, it’s already better than not hiding them from anyone (and especially, if it successfully hides the problem from the uptime monitor, that’s even better 🙂 ). This explains why the DNS records for tny.im are set with TTLs of five minutes.
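
To make this less abstract, here is a minimal sketch of what such a record-management handler could look like, assuming a hypothetical DNS provider with a simple HTTP API (the endpoint, token and record layout are invented for illustration; this is not the actual tny.im setup):

import requests

DNS_API = "https://dns.example.com/zones/tny.im/records"  # hypothetical endpoint
API_TOKEN = "not-a-real-token"                            # hypothetical credential
TTL = 300  # five minutes, as mentioned above

def set_a_records(healthy_ips):
    # Replace the A records for tny.im with the IPs of the servers that are up.
    payload = {"type": "A", "name": "tny.im", "ttl": TTL, "values": healthy_ips}
    r = requests.put(DNS_API, json=payload,
                     headers={"Authorization": "Bearer " + API_TOKEN},
                     timeout=10)
    r.raise_for_status()

# e.g. when the monitor notices that one of three servers went down:
set_a_records(["203.0.113.10", "203.0.113.20"])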

The development of such a DNS record management system was also more or less contemplated in my previous post, where I said:

I’ll take this downtime and new server acquisition as the motivation for setting up a new advanced and redundant system, so that if one server goes down, tny.im (and possibly this blog too) will continue to operate as normal.

And in the end, in a later edit:

On related news, Mirasm – the Tiny Server Redundancy Manager – is mostly finished, only needs some more testing to be put on production servers, managing the new tny.im redundancy system.

“Mostly finished”, as we all know, really means “It’s 99% ready, I only need to figure out the remaining 1% that consists of… everything that is tricky and that I’m not sure how to do”. This is especially true in this case, as I had high requirements for my manager: it couldn’t use any resources other than the servers I already had (it would’ve been easy to have a separate server just for monitoring and editing DNS as needed, but I didn’t want to pay for yet another server on yet another provider), and it couldn’t fail more than tny.im itself. In fact, the time when the manager has its most important work to do is precisely when things are broken, i.e. when a server goes down and takes its copy of the manager with it. I finally finished the project, and it works as planned. I only got the name wrong…

Introducing mersit, a Tiny Server Redundancy Manager

Pronounced “m-eh-rs-ee-t”, with the first “e” like the one in “explain”, mersit is a simple Python script (Python 2.7, because I wasn’t sure what libraries were available for 3.x, nor whether my servers would run it well) hacked together with some sections that definitely look like spaghetti code. The good news is, it works fine and has been well tested, so if you study it as a “black box”, there are no big problems with it.

The purpose of the script is to manage the DNS records of the website served by the group of synced servers, in this case tny.im. It runs on each server, in a peer-to-peer fashion. The peers select a single master, which will monitor all the peers and manage the DNS as they go up and down, “deciding who’s on stage”; all peers check whether the master is up, and select a new one that will edit the DNS to “hide the master from the public” when it goes down.

I definitely want to open-source mersit at some point, but not now because it’s not ready for prime-time (see “spaghetti code”, above), and I want to change some things that will make it more general-purpose. mersit has been managing the live records for tny.im for the past week (it’s been peaceful).


Continuing our journey through the world of meaningful graphs, here’s another.

I have gone so far as to write a read-me for mersit (mainly for me to read, as I know I’ll forget how it works within six months). I think it’s best if I put the start of the read-me here, instead of trying to explain it all, once again:

mersit - Tiny Redundant Server Manager
Copyright 2014 tny. internet media
This version is customized for tny.im

This is a Python 2 script that manages a group of computers/servers/thin clients/machines in a network (local- or wide-area), by automatically executing actions when something relevant happens to one of the machines.

We'll call the "machines" "peers". mersit assumes all peers and the network are trusted.

The script is meant to be run directly on the peers that are to be controlled, in a setup where there is not a single point of failure. It is not of much use when run on a single peer; in the context of this script, a "group" only starts to make sense when it has more than one element.

We'll refer to this script as "controller software" or simply "controller", and to the other software that runs on a peer and which is to be monitored as "application". The controller is made to run unattended, even though it accepts commands (issued by an "operator") to trigger certain behavior manually.

The "something relevant" mentioned in the first paragraph consists on one of these "events of interest":

- A peer goes "online", that is, it is reachable by other peers and reports the status of its controller software as "OK" or "ready";

- A peer goes "offline", that is, it is either not reachable by at least some peers, or the controller is reporting its state as "not good" or "not ready";

- A peer becomes good-for-work (GFW), which means that the application is functioning properly and performing its function (such as listening for incoming connections, data to process, etc.);

- A peer becomes not-good-for-work (NFW), in which case the application is not functioning properly, is too busy to perform its function (over capacity), or is otherwise unavailable.

Each peer works in a given "domain", which is the group the peer belongs to. The domain is specified by a name and secret which act basically like a username and password pair. Peers will only communicate with other peers of the same domain, that is, peers where the domain name and secret are the ones the controller is configured to use. The domain acts as the authentication element; an external party can not join, communicate or perform actions in a domain unless it knows the name and password used by the peers of the domain.

(Please note that communication between peers is not encrypted by the controller - it goes completely plain-text over the network. It is possible to secure the communication between peers using external tools; such secure functionality goes beyond the scope of this software. The "domain" is simply a basic authentication system, implemented using HTTP authentication, to ensure that peers of a certain group don't start talking with peers from other groups. The basic authentication system is enough to protect against the casual script kiddie, but by no means adequate for protection from a malicious party on an untrusted/open network.)
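
As an illustration only (the "/status" path and the JSON fields below are assumptions, not mersit's actual wire format), a status request between peers, with the domain name and secret used as the HTTP basic-auth pair, could look roughly like this:

import requests

DOMAIN_NAME = "tnyim-prod"          # hypothetical domain name
DOMAIN_SECRET = "not-a-real-secret" # hypothetical domain secret

def peer_status(host, port=8080):
    # Ask another peer's controller for its status; None means "unreachable".
    try:
        r = requests.get("http://%s:%d/status" % (host, port),
                         auth=(DOMAIN_NAME, DOMAIN_SECRET), timeout=5)
        if r.status_code == 401:
            return None  # wrong domain: same effect as an unreachable peer
        return r.json()  # e.g. {"ok": True, "gfw": True, "master": "peer-a"}
    except requests.RequestException:
        return None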

The controller on each peer must know _a priori_ (i.e. before it starts) where to find at least some of the controllers on other peers. Peer discovery doesn't happen automatically; however, once a peer's controller can communicate with another controller, it will add every controller in the "contact list" of the latter to its own "contact list".

Imagine the following situation: you have peers A, B and C (and their controller software). The controller in A only knows about peer B. The controller in B only knows about peer C. If you start the controller on peer A, then start the controller on peer B, peer A will tell peer B about its existence, and peer B will tell peer A about the existence of a peer C (regardless of whether peer C is running/reachable). However, if the controller in A knew about no peer (other than itself), it would never find peer B or C, even if their domain settings all matched. Even though a big domain can be bootstrapped from just two peers, to ensure good operation, all controllers should know about all peers. This way, if the controller on a peer resets for some reason, it will have a greater chance of reaching another peer.
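
A tiny sketch of that contact-list merging idea (peer names are illustrative; this is not mersit's actual code):

# This controller was configured knowing only about peer B.
known_peers = {"peer-b.example.net"}

def merge_contacts(their_known):
    # Adopt every peer another controller knows about.
    known_peers.update(their_known)

# After reaching peer B, which reports knowing about peers A and C:
merge_contacts({"peer-a.example.net", "peer-c.example.net"})
print(sorted(known_peers))  # now knows about A, B and C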

The "contact list" is the list of "known" peers. The controller keeps three lists of peers in memory: the "known" peers, the "reachable" peers, and the "GFW" (good-for-work) peers. The list of known peers is initialized from the source code's configuration section when the controller starts. It then proceeds to see which peers are "reachable", that is, can be reached through the network, are in the same domain (not being in the same domain gives the same effect as not being reachable over the network) and have their controller software report its state as "OK".
This initial status checking includes the exchange of some other information about the controller. Once this initial peer identification is done, the controller enters a monitoring loop where it will keep the contents of the three lists up-to-date. The controller keeps running this infinite loop throughout most of its lifetime. How the lists are kept up-to-date and what happens when their contents change is something that depends on the current controller mode.
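
The following sketch shows the rough shape of those three lists and of the loop that keeps them current (the names and the 10-second interval are assumptions; peer_status() would be something like the basic-auth request shown earlier):

import time

known = ["peer-a.example.net", "peer-b.example.net", "peer-c.example.net"]
reachable = []  # peers whose controller answered and reported "OK"
gfw = []        # peers whose application is good-for-work

def peer_status(host):
    # Placeholder for the real check; returns a dict, or None if unreachable.
    return {"ok": True, "gfw": True}

def refresh_lists():
    del reachable[:]
    del gfw[:]
    for peer in known:
        status = peer_status(peer)
        if status and status.get("ok"):
            reachable.append(peer)
            if status.get("gfw"):
                gfw.append(peer)

def monitoring_loop():
    while True:
        refresh_lists()
        # ...react to changes, depending on master/non-master mode...
        time.sleep(10)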

There are two possible modes for controller operation: master and non-master. There is exactly one controller in master mode per domain, and this controller is usually called "the master" (the master peer has the controller in master mode). The differences between the modes are mostly related to what happens in the monitoring loop, but before going into those differences, it is important to understand how the controllers decide which peer is the master peer.

When a controller starts and there are no reachable peers, it promotes itself to master, since there must be exactly one master per domain. Later, when another controller joins the domain (either because it started or because it went online after e.g. a period without connections or power), it checks which peers are reachable from its "known" list and "asks" them which is the master peer. Every peer should reply with the same peer, in which case the new controller assumes that peer is the master, and informs the master about its existence, to account for the fact that the new peer may not be in the master's "known" list.
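
In sketch form, the start-up decision just described might look like this (ask_master and notify_master stand in for real requests to other controllers; they are not mersit's actual function names):

def choose_initial_master(my_name, reachable_peers, ask_master, notify_master):
    # Return the name of the master this controller should follow.
    if not reachable_peers:
        # Nobody to talk to: there must be exactly one master, so it's us.
        return my_name
    # The reachable peers should all name the same master.
    master = ask_master(reachable_peers[0])
    # Tell the master we exist, in case we're not in its "known" list yet.
    notify_master(master, my_name)
    return master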

However, and especially in domains where not all peers initially know about every other peer, it's possible that a "split brain" occurs and there are two masters in the same domain. Imagine a domain where there are four peers D, E, F and G. D only knows about E, which in turn only knows about D. F doesn't know about any peer, and we'll leave G aside for now. All peers are offline.
The D controller starts up, sees it can't reach the only peer it knows (E), so calls itself master. The E controller starts up and reaches D, D says it is the master, E assumes D is master, all is fine.
The F controller starts up, sees it can't reach any peer because its "known" list is empty, so calls itself master and sits quietly waiting for someone to contact it, which in turn would let it know about more peers.
We now have the following situation ([M] represents a controller in master mode, --- represents the knowledge peers have of each other):

 -DOMAIN------------------
|                         |
|   D[M]-----E     F[M]   |
|                         |
 -------------------------

Things could be like this forever, and no conflicts would occur - however, this is probably not a domain you want to have, since F doesn't know about any "event of interest" related to D or E, and these two don't know about any events related to F. In this situation, D--E and F act like separate domains.
Assume that G is a peer which knows about D, E and F, and that its controller starts up, contacting D, E and F. The first two will agree that D is the current master, but F will disagree and say it is the master. At this point we have a conflict. There are many ways to solve this, including some form of "voting" (e.g. the peer the largest amount of the peers say is the master effectively becomes it), but mersit solves this in a simpler way.

The controller checks that everyone in the domain agrees on which peer is the master on every iteration of the monitoring loop. It does this by "asking" each peer in the list of known peers who the master is. The first peer asked is free to reply with any peer. The ones that are asked next must agree with the first one. If not, the controller that was doing the loop tells each disagreeing peer that the actual master is the one from the first peer's reply. It is possible that a minority is asked first, and thus everyone is forced to "change their opinion" to that of the minority. This is not a problem - mersit assumes all peers are trusted. Note that it can sometimes take a few iterations of the monitoring loop for all peers to settle on a single master, because two (or more) peers may be trying to "change the opinion" of the other peers to different masters. This is not a problem either, because even if this kind of concurrency conflict happens once or twice in a row, it will stop happening as soon as one peer is faster than the others at telling everyone (including the other peer(s) that are trying to "change opinions"). What matters is that in the end, every peer knows about all others, and there is a single master (a sketch of this agreement check follows the diagram below). In this case, it could be D:

 -DOMAIN--------------------
|                           |
|  D[M]-----E-----F-----G   |
|                           |
 ---------------------------
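
Here is that per-iteration agreement check in sketch form (again, ask_master and set_master are stand-ins for the real peer requests, not mersit's actual names):

def enforce_single_master(known_peers, ask_master, set_master):
    consensus = None
    for peer in known_peers:
        answer = ask_master(peer)
        if answer is None:
            continue                      # unreachable peer, skip it
        if consensus is None:
            consensus = answer            # the first reply is taken as the truth
        elif answer != consensus:
            set_master(peer, consensus)   # tell the disagreeing peer to switch
    return consensus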

If the master becomes unreachable, or its controller stops working, the other peers will also find themselves a new master, by sorting the list of reachable peers alphabetically and choosing the first peer in the sorted list. Of course, if for some reason the list is not consistent across peers, the peers will try to "convince" others to settle on who they "think" is the master as previously explained, until everyone is set to the same master.
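
The failover rule itself is simple enough to fit in a couple of lines (peer names below are illustrative):

def pick_new_master(reachable_peers):
    # With the old master gone, everyone picks the alphabetically-first reachable peer.
    return sorted(reachable_peers)[0] if reachable_peers else None

# e.g. if D was the master and became unreachable:
print(pick_new_master(["g.example.net", "e.example.net", "f.example.net"]))
# -> e.example.net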

Being the master essentially changes what happens in the monitoring loop. When a controller is in master mode, it is responsible for updating the list of "reachable" and "GFW" peers, by checking which peers are reachable (both in terms of network and in terms of functioning controller) and which have the application in a working condition. If there are changes in the lists that indicate an event of interest, it runs the appropriate handler. If, for example, a peer becomes NFW due to a problem in the application, it will stop being in the GFW list, and the handler function for when a peer leaves that list will be run with the peer in question as the argument. If the master becomes unreachable (network error, controller error, etc.), a new master will be found, as explained in the previous paragraph, and the new master is responsible for running the handler with the previous master as argument.
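
A rough sketch of that reaction, diffing the old and new GFW lists and running one handler per event of interest (the handler bodies are placeholders for the user-customizable ones; in tny.im's case they would be the DNS-editing code):

def on_peer_left_gfw(peer):
    print("peer became NFW: " + peer)    # e.g. remove its A record

def on_peer_joined_gfw(peer):
    print("peer became GFW: " + peer)    # e.g. add its A record back

def react_to_gfw_changes(old_gfw, new_gfw):
    for peer in set(old_gfw) - set(new_gfw):
        on_peer_left_gfw(peer)
    for peer in set(new_gfw) - set(old_gfw):
        on_peer_joined_gfw(peer)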

When a peer is not master, it won't run any handlers for events of interest, and it is not responsible for updating the "reachable" and "GFW" lists - it will retrieve these from the master. The controllers on all peers need to keep their lists up-to-date, sharing a "vision of the domain" similar to that of the master, so that any peer can become a master instantly in case of necessity, without having to spend time performing checks on all peers and ensuring it has the best-and-latest list of "known" peers.

The operator can manually tell a controller to become the domain's master. When the appropriate command is issued, the controller will send a command to every other controller instructing them to switch to the new master. This command may not always have an effect on some controllers, because while the first controller is sending the commands, other controllers are checking whether everyone agrees on who the master is, and issuing the same commands with another master in mind. Here is a sequence of events illustrating the situation, in a domain where there are three peers H, I and J, and H is the initial master:

 0. ...
 1. Peer H checks that every controller agrees it is the master (all agree);
 2. Peer I checks that every controller agrees H is the master (all agree);
 3. Peer J checks that every controller agrees H is the master (all agree);
 4. Operator issues command for peer I to become master;
 5. Controller on I assumes it is master, starts sending commands to other peers;
 6. Peer H checks that every controller agrees it is the master, before the message from I that I is the new master can get to H;
 7. Peer H finds out I (and possibly others) don't agree, sends them commands to change the master back to H;
 8. Peer I changes master back to H;
 9. Peer I checks that every controller agrees H is the master (all agree);
 10. Peer J checks that every controller agrees H is the master (all agree);
 11. ...
 
If the master doesn't change when the manual command is issued, it's a matter of trying again. Most often, this kind of concurrency problem does not occur, and even when it does, it does no damage. While it is true that mersit could detect this situation and keep issuing commands automatically until the decision takes effect, we chose to not make it this way to allow the human operator finer control.
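
In sketch form, the operator-triggered promotion might look like this (set_master stands in for the real peer request; the race described above is why the result may not stick on the first try):

current_master = None  # whoever this controller currently believes is the master

def become_master(my_name, known_peers, set_master):
    # Promote ourselves and tell every other controller to follow.
    global current_master
    current_master = my_name
    for peer in known_peers:
        if peer != my_name:
            set_master(peer, my_name)  # may be undone by a racing agreement check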

The primary focus of mersit is to monitor a distributed application. The master checks whether the application, or part of the application, running on a certain peer is in working condition by asking that peer's controller about the state of the application it is monitoring. In turn, this controller runs a function, defined by the mersit user in the mersit source code, that should check the application and return True (if the application is OK) or False (if not). This can involve, for example, making an HTTP request to an HTTP server on that peer to verify it is working. The controller then communicates the status of the application to the master (which may be itself). All this shouldn't take too long, especially when the domain has many servers, as only one peer is asked at a time. If checking the status of the application typically takes over one second, it is best to store the last known status in a variable, and update that status periodically in an asynchronous manner that may be external to the mersit script.
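
For a web application like tny.im, such a user-defined check could be as simple as the sketch below (the URL is illustrative; the real check in a mersit configuration may differ):

import requests

def application_ok():
    # Return True if the application on this peer is good-for-work.
    try:
        r = requests.get("http://127.0.0.1/healthcheck", timeout=2)
        return r.status_code == 200
    except requests.RequestException:
        return False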

The part related to DNS records is not explained in the read-me, because it is related to the handlers (which each mersit user would customize to the specific needs of their system – as I said, I tried to make it a general-purpose script). Sounds interesting? Feel free to ask questions, or point out problems, in the comments.

Recovering from prolonged outage

tny.im and this website were down for about 42 hours, starting on June 29 at 03:16 UTC.

The problem? BlueVM’s S19-NY server went down, taking with it the server I have/had there (and for which I paid a full year!). Other than this outage, the service had worked fine – fast network, full resource availability – for the three months since I bought it.

S19-NY is still down, without any ETA for it to come back. There’s no information on what condition it will come back in, or *if* it will come back (with the previous contents, at least). BlueVM staff are pretty much unresponsive, other than a guy who sometimes hangs out on IRC and says he can’t fix the KVM instance because he doesn’t have access to it.

Of course, I no longer recommend BlueVM and I don’t plan on renewing the server I have with them.

The “solution” to put an end to two days of downtime, was to buy the cheapest SecureDragon OpenVZ server, (OpenVZ! so hard to live without my beloved KVM, I can’t even use davfs2 because there’s no fuse module!) and restore the backups I had (from four hours before the BlueVM instance went down). This has been done, except for the HTTPS certificate of tny.im… that alone is another story:

As I tried to retrieve the existing cert from StartSSL (because, stupid me, automated backups were not copying everything SSL-related, and I didn’t save it locally), I found my authentication certificate had expired. This basically means my StartSSL account is lost, unless I create a new account and ask their staff to link it to the old one. They probably won’t do that without a payment and some ID checks, so… out of the question. I guess, if the BlueVM server doesn’t come back, I’ll just create a new StartSSL account and generate a new cert for tny.im. There’s no security issue with this, as the previous certificate has not been compromised (unless BlueVM is collecting certificate private keys from inside their clients’ machines…) and so it doesn’t need to be revoked.

To conclude the HTTPS point: tny.im’s HTTPS service is unavailable for now, until I can retrieve the existing certificate from the previous server, or until I get a new one.

Is all the fault on BlueVM’s side? Of course not… I could have lost my love for the money earlier, bought the SecureDragon VPS a day sooner and reduced the downtime by 24 hours. But since I had hoped the problems with BlueVM support were just Sunday-related, I thought that by Monday they would have it fixed. They didn’t.

On related news, I’ll take this downtime and new server acquisition as the motivation for setting up a new advanced and redundant system, so that if one server goes down, tny.im (and possibly this blog too) will continue to operate as normal. I have two servers already (assuming the BlueVM one comes up), and I plan on developing a system where firing up new instances of tny.im on any empty server will be really easy. The system will always be prepared to lose any server at any point, without data loss, and restore full service within five minutes. That way, I can add less reliable hosts, perhaps even VPS trials, to the redundant system. This also allows for scaling the service as needed. Sounds ambitious? Of course it won’t happen in a week, but I have the full summer to develop and test the system…

Why don’t I just go with some SaaS that supports scaling? Two reasons: the price is too high, and the tny.im software is not coded in a way that’s compatible with these services. Let me remind you that, while not exactly a CGI script, tny.im is not coded in one of those fancy modern languages, and even though PHP is not exactly outdated or unmaintained, the quality of the code can make it pretty horrible or pretty good. And the code is… not perfect – it doesn’t use any popular framework, it is based on YOURLS and has many, many hacks (er, feature additions), plus a… very close relationship with the database.

Let me finish by saying that downtime of this kind would be something to expect if I were still hosted by FreeVPS and the like. But believe it or not, on FreeVPS and other sponsorships I’ve never seen customer service as bad as BlueVM’s (and it’s hard to remember an outage as big as this one). It is definitely not adequate for a paid service. In addition to S19-NY, they have many other instances down, with similar or worse downtime. The admins don’t appear to be online or reachable in any way, even by other staff members. The latest news/excuse on the S19-NY situation is that IPMI is broken and they are waiting for the provider to fix it… now tell me, does this look like a serious company, or some poor man’s sponsorship?

EDIT: The BlueVM server is still down. tny.im is now hosted by three servers with round-robin balancing. HTTPS service was restored with a new certificate.

EDIT 7/7/2014: I forgot to update this post in time, but the BlueVM server has been back up for three days now. I only got to know that the service was restored thanks to a friend of mine, because they didn’t reply to my ticket to inform me about it. Anyway, I don’t plan to renew this server, and BlueVM has lost me as a customer (except for some really cheap deal which I’ll use as a personal/development box, and never in production).

On related news, Mirasm – the Tiny Server Redundancy Manager – is mostly finished, only needs some more testing to be put on production servers, managing the new tny.im redundancy system.

90/90/0.001

Looks like >90% of the Internet users spend >90% of their WWW-using time on the same 0.001% of websites.

It’s no wonder plans for a truly decentralized Web never seem to get much traction… how could they, in a time when users’ habits are truly centralized? In fact, I believe the main problem is in the way users use the Web, not the technology itself.
As for the Internet, it is already (comparatively) decentralized.

Do users always know what’s best for them? The past has proven that is not the case…

Server status updates

Just in case you didn’t know/notice, this website and tny.im are no longer hosted on a VPS provided by FreeVPS. Due to issues on the node where that VPS was hosted, which made it unreachable over the network, I needed to buy a new server to avoid longer downtime.

I ended up buying a KVM-powered VPS from BlueVM because they accept Bitcoin and had a good special deal at the time. It’s my third day with the server and so far, so good. It has better specs than the server I had, and after config optimization tny.im is now able to handle more concurrent requests than it did before. I will keep readers updated and post a detailed review of the server, later.

Moving away from sponsorships will be better in the long term (fewer worries, proper support…), even if it means that tny.im is now unprofitable (temporarily, hopefully).

Utilities v1.3 is out

If you watch other websites, pages and forum threads of mine, you may already know about this, but just to make sure you don’t miss it, v1.3 of Utilities is out. Download or more info.

Regarding dead Prizms

Regarding the problem of Casio Prizm calculators that hang on the power-off screen and then die when rebooted, I have the feeling that we [the Casio programming enthusiast community] have discussed this subject way more than Casio ever did – in the end, it may be easier and cheaper for them to fix or replace, say, 1% of the calculators they sell, than to pay lots of working hours to discuss and fix hard-to-reproduce bugs, delaying the production of newer models (after all, nowadays people buy a calculator because it’s fancy, not because it will survive the apocalypse).

To all the users with dead calculators… I know your situation is bad (especially if you didn’t have backups of the data on it), but just send the f***ing thing in for repairs before the three-year warranty expires. Make sure to explain how it broke; maybe they will fix it some day. And when the warranty expires, if you still need a graphing calculator, well, buy a new one (perhaps not from Casio?).

Utilities version 1.2 is out; Casio retweets

In case you haven’t noticed yet, version 1.2 of the Utilities add-in for the Casio fx-CG 10 and 20 (known as the Prizm) has been released today. More information and download on this page.

Following my announcement on Twitter about the new release, it got retweeted by the official Casio Prizm Twitter account. This is a move without precedent on their part – no 3rd-party add-in had ever received the slightest official public recognition. Most likely the social media marketer retweeted my nice tweet without really knowing what he/she was doing.

However, this had little impact. The tny.im shortlinks you see received fewer hits from their retweet than from my original tweet – despite @CasioPrizm having over five times the followers I have.

Data caps and app updates

Is it just my impression, or do many people with smartphones, who also have a data-capped connection at home, nowadays hold off on their pending app updates until they go to a friend’s house, find free (or school/work) WiFi, or otherwise find means to avoid having automatic updates use data on their own Internet connection?

Personally, I set all of my machines to update on the last two days of the month, to avoid the situation where my stupid ISP limits the connection speed to 128 kbit/s after I use 15 GB of data in a single month (if I can’t avoid the situation, at least the speed will only be ridiculously low for at most two days)… yeah, that’s the state of 3G data plans in Portugal (across all ISPs).

The situation of the Casio Prizm

Note: this was originally published as part of a post on Cemetech.

The status of 3rd-party development (and general user interest) in what is currently Casio’s flagship non-CAS calculator is disappointing and inglorious, but the user community is not the only party to blame for the situation. I would say there is a marketing problem on Casio’s side: the Prizm only appeals to students and teachers who are already used to Casio calculators. Personally, I know that if it weren’t for the recommendations of my maths teacher (who is a big proponent of these calculators for their ease of use and similar UX across graphic models), I would have bought a non-CAS Nspire instead, or perhaps a black-and-white Casio model.

Despite great initial success (first on Omnimaga and then on Cemetech), the Prizm never really caught on with the developer community, and I feel it never really caught on with students in general, either. While it is true that the Nspire, and more recently the HP Prime, have more powerful hardware, the first also has a more complex system that actively tries to block 3rd-party binary software, and the second does not have the same target market (the HP Prime doesn’t have a non-CAS version). Cemetech seems to have turned more to the TI-84 Plus CSE, which, while free of the software constraints of the Nspire, has inferior hardware specs that put it in another league (I guess it had some success in this community because it was similar to “what people were used to”, i.e. the old TI calculators, unlike the Prizm and the Nspires).
Still, and somehow, the Prizm seems to have a notable market share in Asia, but due to different character sets and more, the Western and East Asian communities don’t communicate much. From what I understand, the Prizm seems to be used in China at a higher education level than in the rest of the world.

From my point of view, the marketing done by Casio for the Prizm was as simple as saying “we were the first to release a full-color graphic calculator, here it is” and running a few contests while the model was new, but without any effort to distinguish themselves from the competition that would come later (and which made a much bigger advertising effort in many markets). Even though they were the first to show a calculator with a full-color, high-resolution screen, while simultaneously being allowed on most official exams, I feel they did not fully explore the possibilities of the screen or of the OS and hardware behind it, let alone explain them to users.

On the technical side, many aspects of the OS on the Prizm could have been polished (certain things such as the Program editor feel really slow at the default clock speed, as do the constant picture decoding and redrawing when a g3p is shown on the screen, for example in eActivity). Things such as the separation between a “Main Memory” and a “Storage Memory”, while familiar to existing users of Casio systems, are metaphors unused on other computer systems, and while technically sound (and allowing for backwards compatibility), they are inadequate for a great user experience – I know of people who don’t quite understand why they get memory errors on lists, matrices and Basic programs even though they have plenty of storage memory, and I also know the problem of understanding different memory sections is common to TI calculators. OS updates never addressed (or are yet to address?) this, but it’s unlikely they ever will, because it would require major technical changes, perhaps even hardware changes (more RAM or dynamic RAM allocation, anyone?) and the development of a platform that’s not akin to anything built by Casio in terms of calculators, which means users would need to relearn it all over again – and if Casio builds something too different from previous generations, the results might not be positive (look at how the Nspire fared on the TI side).

Then Casio moved on to the new Classpad models (which not everyone can buy, because they are not allowed on all exams, and not everyone needs a CAS calculator at university), and the Prizm was more or less forgotten. While Casio’s offering has some points that stand out from the competition, it has outdated hardware specs when compared to the other CAS calculators.

Casio calculators become “forgotten” not because the manufacturer stops providing support for them (the Prizm just received the 2.00 OS update, and a new official add-in – quite the contrary, in fact), but because there is little effort to publicize these updates to their older models. I guess if they don’t move more, it’s because things are selling and working “good enough” for them. Which isn’t synonymous with things being “good enough” for the power-user community.

In my opinion, the Casio calculator development community is spread too thinly among many small communities, which have low levels of activity (especially when it comes to the Prizm) and in some ways even alienate each other, instead of uniting to move things forward. Note that I’m not suggesting the creation of a new community to hold all the 3rd-party Casio development (see xkcd 927), but instead more communication and joint ventures between existing ones, for example in the form of contests. Unfortunately, differing ideas and cultures seem to make this difficult most of the time, but it would be great if people managed to overcome that in favor of higher goals.