On paid public beta testing

Important: This post was initially drafted in September 2023. I then completely forgot to finish proofreading it and to give it a better conclusion. Rather than let the work go to waste, I decided to post it now with minimal editing. The specific situation that prompted its creation has long passed, but I think the general thoughts and concerns still apply. While it is not relevant to the arguments being made, I’ll mention that since then, in mid-2024, I did finally buy and play through Phantom Liberty, which I thoroughly enjoyed.

The recent 2.0 update to Cyberpunk 2077, and the near-simultaneous paid DLC release, made me reflect on the redemption arc that game has gone through. Just like record-breaking sellers such as GTA V and Minecraft, it can’t possibly please all audiences, but I consider Cyberpunk to finally be in a state where it mostly warrants the massive marketing and hype it received leading up to its release in December of 2020 – nearly three years ago. While I think the game had already become worth its asking price quite some time ago – the developers, CDPR, issued multiple large patches over the years – this recent 2.0 update introduces so many changes, from small details to large overhauls of gameplay systems, that it makes the 1.x series look like beta versions of a game that was two thirds of the way into its development cycle.

In a world where so many software publishers have moved towards continuous releases with no user-visible version numbers, or versioning that is little more than a build number, and considering that live service games are one of the big themes in the industry, it’s almost shocking to see a triple-A game use semantic versioning properly. And bringing semver to the table isn’t just the whim of a software developer failing to come up with good analogies: in semver, you increase the major version when you introduce incompatibilities, and 2.0 introduces quite a few of them. 2025 editor’s note: CDPR has since started/resumed using “creative” minor patch numbers which I am not 100% sure strictly follow semver.

In technical terms, CDPR dropped compatibility with an older generation of consoles, and raised the minimum requirements on PC (although the game generally appears to run smoother than before, at least on hardware that was already above the recommended spec). But the possible incompatibilities extend to the flesh sitting in front of the screens: there have been major changes to character progression, the balancing of different combat options, the introduction of new gameplay features like vehicle-oriented combat and, although I haven’t been able to confirm this, perhaps even small changes to the consequences/presentation of some story choices. So players who were already comfortable with the way 1.x played may be unhappy, or at least temporarily uncomfortable, with the changes made for 2.0. Personally, I am still on the fence about some changes, but to be honest, I haven’t explored this new version for more than three or so hours yet. 2025 editor’s note, hundreds of hours later: I cannot presently name any change I am unhappy with.

There is no doubt that 2.0 is a major overhaul of CP2077, and I am convinced that, overall, the changes were for the best: the game is clearly a better and more complete product now. This doesn’t mean there aren’t still more improvements that could be made. Furthermore, this game’s past cannot be changed, meaning that all the material needed for pages-long essays and hours-long “video essays” will forever exist. After all, there is a strong argument that the game took almost three years to reach the minimum level it should have been at on release, considering the massive marketing campaigns and all the “half promises” made along the way. However, I strongly believe the game ended up becoming a better product than if its launch had been smooth; the space for the development of a 2.0 version, one that can afford to introduce this many changes to the gameplay and still be received positively, likely would not have been there if Cyberpunk had been just yet another “mildly disappointing considering the hype, but otherwise decent and competent” major release.

The main notion I wanted to explore in this essay is this perverse situation, where there is seemingly an incentive to release unfinished software as if it were finished, disappoint consumers, and still end up profiting massively – sometimes even ending up with a greatly improved product to sell and plenty of praise to receive, in ways that would be difficult to achieve otherwise. “Entitled gamers” (often paying consumers) may be the noisiest about it, and gaming likely has the most prominent and recognized “redemption arcs,” but this situation permeates all sorts of software development, including areas that were once seen as “highly sensitive” or “mission critical”, such as certain embedded systems, communication systems, and finance software.

Note: it is probably a good time to remind readers that these opinions are my own and do not represent the views of my employer, nor my opinions on my employer’s views and strategy.

Software engineering can move fast and break things: the internet gives you the luxury of fixing things later. This is a luxury that not many other engineering disciplines can afford, but it is also a fallacy: you can’t issue a patch to your reputation nor to societal impact as easily as you can deploy a new version of your code.

Internet pervasiveness has been changing the risk management and product iteration paradigms around software development for the last two decades or so. In some embedded systems, the paradigm shift was named the “internet of things”, although embedded systems that can be upgraded remotely had been a thing for decades before the term was popularized – the real change is that there are many more of these now, in categories where they didn’t really exist before, such as home appliances. Connecting more and more things to the internet seemingly becomes more acceptable the easier they are to connect to a network, and many advancements were made on the hardware front to enable this.

In gaming, there is the well-known concept of “early access”, used to clearly label products which are under development and liable to undergo significant changes, but already available to consumers. Some developers use this to great effect, collecting feedback and telemetry from large numbers of players to ultimately end up with a better product. Outside of gaming, technically minded consumers have long been familiar with the term “beta testing.” Beta/early access software may be available for purchase (although, admittedly, sometimes discounted) or be provided at no cost to existing users. In any case, consumers enrolling in these programs are aware of what they’re getting into.

Over the last decade or so, I feel that users have been gradually exposed to software and services whose initial versions are less and less complete. Some of this is to be expected and encouraged, such as the aforementioned beta and early access programs that have the potential to improve the final product. But clearly the feedback from beta testing programs didn’t feel sufficient to many developers, who started including more and more telemetry in hopes of collecting more feedback, without users having to manually provide it or specifically opt into any study.

I believe the really objectionable situations are those where the barrenness or incompleteness is initially obscured and then, if users are lucky, iterated upon at an often unpredictable pace. It makes product reviews and testimonials become outdated, and thus mostly useless, quickly. This development model is convenient for the developers, as it theoretically carries the least risk: ideas, features and business models get the chance to be evaluated sooner, at a time when it is easier to pivot. It becomes possible to spread the development of complex features over a longer period of time, while collecting revenue and capturing market share earlier in the process.

Unfortunately, from my point of view, there isn’t much in this go-to-market approach that is beneficial for the clients/users/consumers. Particularly for products that are directly paid for (as one-time purchases or as subscriptions), I’ve often felt that one is paying to become an unpaid beta tester and/or an unwilling subject of a focus group study. The notion of simply opting for an alternative is not applicable when the product is quite unique, or when every worthy alternative is doing the same.

Then there is the update fatigue factor. After such an “early launch,” ideally, the inferior initial product is then quickly updated. But this rarely consists of just a couple of updates that make lots of changes in one go. Most likely, the audience will gradually receive multiple updates over time, frequently changing the design, feature set and workflow of the product, requiring constant adaptation. Adding to this annoying situation, these updates may then be rolled out in stages, or as part of A/B testing – leading to confusion regarding the features of the product and how to use them, with different users having conflicting experiences – something that can almost be seen as gaslighting.

It is difficult to harshly criticize product developers who improve their products post-launch, be it by fixing mistakes, adding features or improving performance and fitness for particular purposes. I don’t think I would be able to find anyone who genuinely believes that Cyberpunk shouldn’t have been updated past the initial release, and I am certainly not that person either. It would be even harder to find someone who can argue that Gmail should have stayed exactly the same as its 2004 release… wait, spam filters aside, maybe that won’t be hard at all. You can easily find people longing for ancient Windows versions, too.

Coming back to Cyberpunk, I think most people who played its initial versions (even those who had a great time) will agree that it should have been released and initially advertised as an “early access” title. For many players, the experience was underwhelming to the point of being below the expectations set even by many prior indie early access titles. Those who had a great time back then (mostly those playing on PC) will probably also agree that, given all the features added since the 1.0 release, that version might as well have been considered an “early access” one too. Hence, I argue that the problem is not necessarily with the strategy of launching early and updating often, but with doing so without properly communicating that expectation.

One must wonder if Cyberpunk would have been so critically acclaimed and reached such a complete feature set, years later, if it had been released in a more presentable state. I can imagine an alternate universe where the 1.0 version of the game releases with no more than the generally considered acceptable number of bugs and issues, which get fixed in the next two or three patches over the course of a couple of months. The game receives the deserved critical acclaim (that it received anyway – that was controversial too, but I digress) and, because it releases in a good state, CDPR never feels pressured into making major changes to add back cut features or to somehow “make up for mistakes.” The end result would be a game where there are maybe one or two DLCs available for purchase, but owners of the base version don’t really see many changes beyond what was initially published – in other words, the usual lifecycle of non-controversial games.

It is possible that there is now a bit of a perverse incentive to release eagerly awaited games – and possibly other products – in a somewhat broken and very incomplete state that excels only on particular metrics – in the case of Cyberpunk, those metrics would be the story and worldbuilding. This, only so that they can then remain in the news cycle for years to come, as bugs are fixed and features are added, eventually receiving additional critical acclaim as they join the ranks of games with impressive redemption arcs, like No Man’s Sky and Cyberpunk did. To be clear, I think it would be suicidal to do this on purpose, but the truth is that generating controversy and then “drip-feeding” features might become more common outside of live service games.

A constant influx of changes to a product can frustrate consumers and make it difficult to identify the best option available in the market. Cynics might even say that that’s a large part of the goal: to confuse consumers to the point where they’ll buy based purely on brand loyalty and marketing blurbs; to introduce insidious behavior in software by shipping it gradually across hundreds of continuously delivered updates, making it impossible to distinguish and select between “known good” and “known bad” releases.

I must recognize that this ability to update almost anything at any time is what has generally made software more secure, or at least as secure as it’s ever been, despite threats becoming more and more sophisticated. For networked products, I will easily choose one that can and does receive updates over one that can’t, and I am even more likely to choose one I can update myself without vendor cooperation.

Security has been a great promoter of, and/or excuse for, constant updates and the discontinuation of old software versions and hardware models. The industry has decided, probably rightly, that at least for consumer products, decoupling security updates from feature changes was unfeasible; it has also decided, quite wrongly in my view, that it was too unsafe to give users the ability to load “unauthorized” software and firmware. This is another decision that makes life easier for the developers and has no upsides for users that I can think of. In some cases, the lack of security updates has even been used as a way to sell feature updates. For that upselling strategy to work, it’s important that users can’t develop and load the updates/fixes themselves.

I am sure that people in marketing and sales departments worldwide will argue that forcing feature updates onto users is positive, using happy-sounding arguments like “otherwise they wouldn’t know what they’re missing out on.” I am sure I’ve seen this argument being made in earnest, perhaps more subtly, but it should be obvious to everyone outside of corporate, and more specifically outside of these departments, that in practice this approach just reduces user choice and is less respectful of users than the alternative. Funnily enough, despite being used as a punching bag throughout this essay, Cyberpunk is one of the products that, despite a notable number of feature updates, also respects user freedom the most, as the game is sold DRM-free and CDPR makes all earlier versions of the game available for download through GOG. And this is a product whose purpose is none other than entertainment – now if only we had the same luxury regarding things which are more essential to daily life.

The truth is that – perhaps because of the lack of alternatives in many areas, or simply because of a lack of awareness and education – the public has either decided to accept the mandatory and often frequent feature updates, or decided that they have no option but to accept them. This is where I could go on a tangent about how, despite inching closer and closer every year (in recent years, largely thanks to Valve – thanks Valve!), the year of the Linux desktop probably won’t ever come – but I won’t, that will be a rant for another time; you see, it’ll be easier to write when desktops aren’t a thing anymore.

Until this “release prematurely, and force updates” strategy starts impacting profits – and we’re probably looking at decade-long cycles – it won’t be going away. And neither will this increasingly frequent feeling that one is paying to be a beta tester – be it directly with money, through spending our limited attention, or by sharing personal and business data. The concept of the “patient gamer,” who waits months or even years after a game’s release to buy it at its most complete (and often cheapest) point, might just expand into an increasing number of “patient consumers” – often, much to the detriment of their IT security.

Casio Prizm software

When I was in high school, I learned a lot about embedded systems and reverse engineering by building add-ins for the Casio Prizm series of graphing calculators (fx-CG 10/20/50/whatever new models they’ve released since then). These “add-ins” were third-party software, not sanctioned by Casio, running native code on proprietary purpose-built hardware and firmware, interacting with APIs reverse engineered by hacker communities. My projects were quite ambitious; if you are curious about what they were, you can see their READMEs ([1], [2], [3]). These projects were my first serious experience with the C programming language and with embedded systems programming.

As of 2025, all of that took place over ten years ago. Since then, Casio has released multiple, very different, OS versions, and even new calculator models which may use similar OS platforms. I don’t know for sure – I haven’t kept up with the subject in, quite literally, a decade.

All of my Prizm add-ins are now unsupported, especially on current OS versions and calculator models. They have not been tested on these versions, and I am sure that nasty, dangerous things are happening behind the scenes which could damage your calculator. I know this for sure only because I sometimes receive emails from lost souls complaining about unexpected behavior that definitely wasn’t occurring back when I made these add-ins, and sometimes they even mention calculator models I have never heard of. It is possible to permanently brick these calculators through the use of incompatible or improperly programmed add-ins (I know from first-hand experience, why do you ask?).

To reflect this, the downloads for these add-ins are no longer available from my websites. They may still be floating around on the internet, on third-party websites to which they were uploaded by me or others. I strongly discourage their use for the reasons detailed above.

I do not wish to receive further communications about these add-ins. While I am proud of the work I did at the time, and am happy to reminisce and discuss the general work and challenges that went into building them, I cannot and will not help you with any concrete problems with them, including:

  • Bugs, crashes, slowness, incompatibilities, missing features or any other technical or emotional problems you may have encountered while using or attempting to use them;
  • Trouble finding where to download these add-ins;
  • Trouble compiling or understanding their source code.

I stopped developing for the fx-CG series years ago, I no longer follow the custom add-in development scene, my add-ins will not receive further updates, and I can’t help you. Their source code is available on GitHub for anyone who wants to be inspired by it, subject to the terms of the respective licenses. Unfortunately, the development platform / compiler stack that was in use at the time I built them was quite fragile and honestly janky, and much of the code wouldn’t work right if e.g. compiled with slightly wrong linker settings. I have no idea how to fix the code to work with whatever SDK is used nowadays, nor with recent compiler/linker versions, nor how to find the ancient SDKs and settings used to compile them, etc.

After years of explaining all of the above in my replies to the occasional email about these add-ins, I have recently decided to care even less and just leave these emails unanswered. Please realize that even if I were to reply, it would be out of courtesy – those replies would not contain any useful information.

Thank you for your understanding.

TIME for a WTF MySQL moment

Many people have been experiencing strange time perception phenomena throughout 2020, but certain database management systems have been into time shenanigans for way longer. This came to my attention when a friend received the following exception in one of his projects (his popular Discord bot, Accord), coming from the MySQL connector used with EF Core:

MySqlException: Incorrect TIME value: '960:00:00.000000'

Not being too experienced with MySQL, as I prefer PostgreSQL for reasons that will soon become self-evident, for a brief moment I assumed the problem with this value was the hundreds of hours, as one could reasonably assume that TIME values were capped at 24 hours, or that a different syntax was needed for values spanning multiple days – say, “40:00:00:00” to represent 40 days. But reality turned out to be more complex and harder to explain.

Checking the documentation being the most natural next step, here is what the MySQL documentation says:

MySQL retrieves and displays TIME values in 'hh:mm:ss' format (or 'hhh:mm:ss' format for large hours values).

So far so good: our problematic TIME value respects this format, but the fact that hh and hhh are explicitly pointed out is already suspect (what about values with over 999 hours?). The next sentence in the documentation explains why, and it left me with even more questions of the WTF kind:

TIME values may range from '-838:59:59' to '838:59:59'.

Oooh Kaaay… that’s an oddly specific range, but I’m sure there has to be a technical reason for it. 839 hours is 34.958(3) days, and the whole range spans exactly 6040798 seconds. The documentation also mentions the following:

MySQL recognizes TIME values in several formats, some of which can include a trailing fractional seconds part in up to microseconds (6 digits) precision.

Therefore, it also makes sense to point out that the whole interval spans 6 040 798 000 000 microseconds, but again, these seem like oddly specific numbers. They are not near any power of two – the latter is between 2^42 and 2^43 – so MySQL must be using some awkward internal representation format. But before we dive into that, let me just point out how bad this type is. It is the closest thing MySQL has to a time interval type, and yet it can’t deal with intervals that are just a bit over a month long. How much is that “bit”? Not even a nice, rounded number of days, it seems.
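For the doubtful, these oddly specific numbers are easy to check with a few lines of Python – my own back-of-the-envelope arithmetic, nothing from MySQL itself:

max_time_s = 838 * 3600 + 59 * 60 + 59   # '838:59:59' in seconds
span_s = 2 * max_time_s                  # from -838:59:59 to +838:59:59
span_us = span_s * 1_000_000             # with 6 fractional digits
print(max_time_s, span_s, span_us)       # 3020399 6040798 6040798000000
print(2**42 < span_us < 2**43)           # True: not near a power of two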

To make matters worse, it appears that the most popular EF Core MySQL provider maps .NET’s TimeSpan to TIME by default, despite the fact that TimeSpan can represent intervals in the dozens of millennia (it uses a 64-bit integer of 100 ns ticks, i.e. 10^-7 s precision) compared to TIME’s measly “a bit over two months”. This is an issue other people have run into, and the discussion in that issue includes a “This mimics the behavior of SQL Server” remark, which made me go check and, sure enough, SQL Server’s time is meant to encode a time of day and has a range of 00:00:00.0000000 through 23:59:59.9999999, something which overall makes more sense to me than MySQL’s odd TIME range.

So let’s go back to MySQL. What is the reasoning behind such an interesting range? The MySQL Internals Manual says that the storage format for the TIME type changed with version 5.6.4, which is when it gained support for fractional seconds. It uses 3 bytes for the non-fractional part. Now, had they just used these 3 bytes to encode a number of seconds, they would have been able to support intervals spanning over 2330 hours, which would already be a considerable improvement over the current 838 hours maximum, even if still a bit useless when it comes to mapping a TimeSpan to it.

This means their encoding must be wasting bits, probably so it is easier to work with… not sure in what circumstances exactly, but maybe it makes more sense if your database management system (and/or your conception of what the users will do with it) just loves strings, and you really want to speed up the hh:mm:ss representation. So, behold:

1 bit sign (1= non-negative, 0= negative)
1 bit unused (reserved for future extensions)
10 bits hour (0-838)
6 bits minute (0-59) 
6 bits second (0-59) 
---------------------
24 bits = 3 bytes
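Based on my reading of this layout – and glossing over how MySQL actually encodes negative values on disk – packing and unpacking a value could look something like the following Python sketch. This is an illustration, not MySQL’s actual code:

def pack_time(hours, minutes, seconds, negative=False):
    # 1-bit sign (1 = non-negative), 1 unused bit, then 10/6/6 bits
    assert 0 <= hours <= 838 and 0 <= minutes <= 59 and 0 <= seconds <= 59
    sign = 0 if negative else 1
    value = (sign << 23) | (hours << 12) | (minutes << 6) | seconds
    return value.to_bytes(3, "big")

def unpack_time(raw):
    value = int.from_bytes(raw, "big")
    sign = "" if (value >> 23) & 1 else "-"
    hours = (value >> 12) & 0x3FF    # 10 bits
    minutes = (value >> 6) & 0x3F    # 6 bits
    seconds = value & 0x3F           # 6 bits
    return f"{sign}{hours}:{minutes:02}:{seconds:02}"

print(unpack_time(pack_time(838, 59, 59)))  # prints 838:59:59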

This explains everything, right? Well, look closely. 10 bits for the hour… and a range of 0 to 838. I kindly remind you that 2^10 is 1024, not 838. The plot thickens. I’m not the first person to wonder about this, of course; this was asked on StackOverflow before. The accepted answer in that question explains everything, but it almost didn’t, as it initially dismisses the odd choice of 838 as “backward compatibility with applications that were written a while ago”, and only later explains that this choice had to do with compatibility with MySQL version… 3, from the times when, you know, Windows 98 was a fresh operating system and Linux wasn’t 10 years old yet.

In MySQL 3, the TIME type used 3 bytes as well, but they were used differently. One of the bits was used for the sign as well, but the remaining 23 bits were an integer value produced like this: Hours × 10000 + Minutes × 100 + Seconds; in other words, the two least significant decimal digits of the number contained the seconds, the next two contained the minutes, and the remaining ones contained the hours. 2^23 is 8388608, i.e. 838:86:08, therefore the maximum valid time in this format is 838:59:59. This format is even more unwieldy than the current one, requiring multiplication and division to do basically anything with it, except string formatting and parsing – once again showing that MySQL places too much value on string IO and not so much on having types that are convenient for internal operations and non-string-based protocols.
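In the same spirit, here is a hedged Python illustration of the MySQL 3 scheme described above, and of where the 838 limit falls out of it:

def pack_v3(hours, minutes, seconds):
    return hours * 10000 + minutes * 100 + seconds  # decimal packing

def unpack_v3(value):
    return value // 10000, (value // 100) % 100, value % 100

print(pack_v3(838, 59, 59))  # 8385959, fits in 23 bits
print(2**23 - 1)             # 8388607, which would read as '838:86:07';
                             # 86 minutes are not valid, so the largest
                             # valid packed time is 838:59:59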

MySQL developers had ample opportunities to fix this type, or at the very least introduce an alternative one that is free of this reduced range. They changed this type twice from MySQL 3 until now, but decided to retain the range every time, supposedly for compatibility reasons. I am struggling to imagine the circumstances where increasing the value range for a type can break compatibility with an application – do types in MySQL have defined overflow behaviors? Is any sane person writing applications where they are relying on a database type’s intrinsic limits for validation? If yes, who looked at this awkward 838 hours range and thought of it as an appropriate limitation to carry unchanged into their application’s data model? At this point, I don’t even want to know.

Despite having changed twice throughout MySQL’s lifetime, the TIME type is still quite an awkward and limited one. That unused, “reserved for future extensions” bit is, in my opinion, really the pièce de résistance here. Here’s hoping that one day it will be used to signify a “legacy” TIME value and that, by then, MySQL and/or MariaDB will have support for a proper type like PostgreSQL’s INTERVAL, which has a range of +/- 178000000 years and a very reasonable microsecond precision.

See the comments on this post on Hacker News

The limitations of hiding limitations: a striped case study

This GitHub repo, created just 5 hours before this post, shot to the top of Hacker News quite fast (see the thread). Its content is a readme containing a demonstration of the limitations of current artificial intelligence applications, specifically, the algorithms employed in Amazon product search, Google image search and Bing image search, by showing that searching for “shirt without stripes” does, in fact, bring up shirts, both with stripes and without.

At the time of writing, the brief but clever document can be seen as a mocking criticism of these systems, as it links to the pages where the three companies boast about their “broadest and deepest set” of “cutting-edge” “responsible” AI. I took these words from their pages, and of course you can’t tell which came from where, adding to the fun.

Some of the comment threads on the Hacker News submission caught my attention. For example, this comment thread points out the possible discrimination or bias apparently present in those systems, as doing a Google image search for “person” showed mostly white men to that user. This other thread discusses whether we should even apply natural language processing to a search query. In my opinion, both threads boil down to the same problem: how to manage user expectations about a computer system.

Tools like Google and Bing have been with us for so long, and have been improved to such a point, that even people who work in IT, and have a comparatively deep understanding of how they work, often forget how much of a hack they really are. We forget that, in many ways, Google is just an extremely advanced, web-scale successor to good old grep. And we end up wondering if there is ethnic or gender discrimination in our search results, which there probably is, but not because Google’s cyborgs are attracted to white men.

When web search was less perfect, when you needed to tinker with your search query multiple times to even get close to the results you wanted, it was very easy to see how imperfect those systems were, and we adjusted our expectations accordingly. Now, we still need to adjust our queries – perhaps even more often than before, as some Hacker News commenters have suggested – but the systems are much fuzzier, and what ends up working feels more random to us humans than it once did. Perhaps more interestingly, more users now believe that when we ask Google for something, it intrinsically understands the concepts we mentioned. While work is certainly being done to ensure that is the case, it is debatable whether that will even lead to better search results, and it’s also debatable whether those results should be “unbiased”.

Often, the best results are the biased ones. If you ask your AI “personal assistant” for the weather, you expect the answer to be biased… towards your current location. This is why Google et al. create a “bubble” for their users. It makes sense, it’s desirable even, that contextual information is taken into account as an additional, invisible argument to each search. Programmers are looking for something very specific when they search the web on how to kill children.

Of course, this only makes the “shirt without stripes” example more ridiculous: all the information on whether to include striped shirts in the results is right there, in the query! It does not need any sort of context or user profiling to answer this search query “correctly”, leading to the impression that indeed these systems should be better at processing our natural language… to the detriment of people who like to treat Google as if it was grep and who would use something more akin to “shirt -stripes”, which, by the way, does a very good job at not returning shirts with stripes, at least on a private browsing window opened by me!

Google image search results for “shirt -stripes”. I only see models with skin colors on the lighter side… hmm 🤔

Yes, I used “shirt -stripes” because I “read the documentation” and I am aware of the limitations of this particular system. Of course, I’m sure Google is working towards overcoming these limitations. But in the meantime, what they offer is an imperfect system that seems to understand our language sometimes – just go ahead and Google something like “what is the president of France” – but fails unexpectedly at other times.

And this is the current state of lots of user interfaces, “artificial intelligence”, and computer systems in general. They work well enough to mask their limitations, we have grown accustomed to their quirks, and thanks to that masking we ultimately perceive them to be better than they actually are. Creating “personal assistants” which are ultimately just a glorified front-end to web search has certainly helped us perceive these tools as more advanced than they actually are – at least until you actually try to use them for anything moderately complex, of course.

Their fakery is just good enough to be a limitation, a vulnerability: we end up thinking that something more is going on, that these systems conspire in their quirks, that some hidden agenda is being pushed by having Trump or Greta appear more often in search results. Most likely, there are way more pictures of white people on the internet than of any other skin color¹, and I’m also sure that Greta has a better modern-day-equivalent-of-PageRank than me or even the Portuguese president, even taking into account the frequency with which he makes headlines. Blame that on our society, not the tools we use to navigate this mess we created.

¹ Quite the bold statement to leave without a reference, I know. Of course, it boils down to a gut feeling – a bias, if you will. Whether computer systems somehow become more human by also being biased is something that could be discussed… I guess I’ll try to explore this in a future blog post I’ll never get to actually write.

Developing for Android is like being a (demonetized) YouTuber

Many are aware that some YouTubers are unhappy with how YouTube operates. But are you aware that Android app developers go through similar struggles with Google Play? Let me try and explain everything that’s wrong with Android in a single 20-minute read.

Android was once considered the better choice of mobile platform for those looking for customizability, powerful features such as true multitasking, support for less common use cases, and greater developer freedom. It was the platform of choice in research and education: not only are the development tools free and cross-platform, but Android was also a very flexible operating system that did not get in the way of experimenting with innovative concepts or messing with the hardware we own. This is changing at an ever faster pace.

While major new Android versions used to bring features that got both users and developers excited, for the last few versions I have dreaded the moment a new Android version is announced, and I find myself looking for courage (heh) to read its changelogs and developer guidelines. And new Android versions are not the only things that make my heart beat faster for the wrong reasons: changes to Google Play Store policies are always a fun moment, too.

Before we dive in any further, a bit of context: Android was not the first mobile OS I used; references to my experiences and experiments with Windows Mobile 6.x are probably scattered around this blog. I started using Android at a time when 4.2 was the latest version, I remember 4.4 being announced shortly after, and that was the version my first Android phone ran until the end of its useful life. Android was the first, and so far only, mobile operating system for which I got seriously invested in app development.

I started messing with Android app development shortly before 6.0 Marshmallow was released, so I am definitely not an old-timer who can say he has seen Android evolve from the beginning, and certainly not from the perspective of a developer. Still, I feel like I have witnessed a decade of changes – in large part because, even during my “Windows Mobile experiments” era, I was paying attention to what was happening on the Android side, with phones I couldn’t yet afford to buy (my Windows Mobile “Pocket PCs” were hand-me-downs). I am fully aware of how bad Android was for both users and developers in the 4.x and earlier eras, in part because I still had the opportunity to use those versions, and in part because my apps had to support some of them.

API deprecation and loss of backwards compatibility

With every Android version, Google makes changes to the Android APIs. These APIs are how apps interact with the operating system, and simplifying things a bit, they pretty much define what apps can and can’t do. On top of this, some APIs require permissions, which you agree to when you install apps that use them, and some of these permissions can be allowed or denied by the user as he runs the app (of course, the app can refuse to run if the permissions are denied, but the idea is that it will degrade gracefully and provide at least some functionality without them). This is the case for the APIs that access your contact list or your location.

New Android versions include new APIs and, in the past, barely any changes were made to APIs introduced in previous versions. This meant that applications designed with an older version in mind would still work fine, and developers did not need to immediately redesign their apps with new versions in mind.

In the past two to three years, new Android versions have also begun removing APIs and changing how the existing ones work. For example, applications wishing to stay active in the background now have to display a permanent notification, an idea which sounds good in theory, but whose end result is having a handful of permanent notifications in your drawer, one for each application that may need to stay active. For example, I have two on my phone: one for the call recorder, and another for the equalizer system. One of my own apps also needs to show a similar notification in Android 8/Oreo and newer, in order to reliably perform Wi-Fi scans to locate the user in specific places.

In the upcoming Android version 10/Q, Google intends to restrict even more what apps can do. They are removing the ability for apps to access the clipboard, killing an entire category of clipboard management apps (apps that let you keep a history of what you copied, sync the clipboard with your other phones and computers, etc.). Currently, all apps can access the clipboard without special permissions; the correct way to solve this would be to add a permission prompt, not to get rid of the API entirely. Applications can no longer turn Wi-Fi on or off, which prevents automation apps from e.g. turning off Wi-Fi when you’re driving. And they are thinking of entirely preventing apps from accessing arbitrary files in “external storage” (SD cards and the area of internal memory on your phone where screenshots and camera pictures go, and where you put your MP3s, game ROMs for emulation, etc.).

Note that all of these abilities that are being removed for “security” could simply be gated behind a permission prompt, as with the contact list or location. Instead, they decided to remove them entirely – even if users want these features, apps won’t be able to implement them. Existing apps will probably be review-bombed by users who don’t understand why things no longer work after updating to the shiny new Android version.

These changes to existing APIs mean a lot of work for users and developers. Applications that worked fine until now may stop working. Developers need to update their apps to reflect this, implementing less user-friendly workarounds, explanation messages, and so on. This takes time, effort and money which would be better spent actually fixing other issues in the apps, or developing new features. For small teams or solo developers, especially those doing app development as a hobby or as a second job, catching up with Google’s latest “trends” can be insurmountable. For example, the change to disallow background services meant that I spent most of my free time during one summer redesigning the architecture of one of my apps, which in turn introduced new bugs, which had to be diagnosed, corrected, etc. – and, in the end, said app still needs to show a notification to work properly in recent Android versions.

There are other ways Google can effectively deprecate APIs and thus limit what applications can do, without releasing new Android versions or having to update phones to them. Google can decide that apps that require certain permissions will no longer be allowed on the Play Store. Most notably, Google recently disallowed the SMS and Call Log permissions, which means that apps that look at the user’s call log or messaging history will no longer be allowed on the store.

Apps using these permissions can still be installed by downloading their APKs directly or by using alternative app stores, but they will no longer be allowed on the Play Store. This effectively means that for many apps, the version on the Play Store no longer contains important functionality. For example, call recorders are no longer able to associate numbers with the recordings, and automation apps can no longer use SMS messages as a trigger for actions. Because Google Play is where 99% of people get their apps, this effectively means functionality requiring these permissions is now disallowed, and won’t be available except to an extremely small minority of users who know how to work around these limitations.

The Google Play Store is the YouTube of app developers

Being on the Play Store is starting to feel much like producing content for YouTube, where policy changes can be sudden and announced with little advance notice. On YouTube, producers always have to be on the lookout for what might get a video demonetized, on top of dealing with content claims, both actions promoted by entirely automated, opaque systems. On the Play Store, we need to be constantly looking out for other things that might suddenly get our app pulled or our developer account banned – together with the accounts of everyone who Google decides has anything to do with us:

And this is just a tiny sample, not even the “best of”, of the horrifying stories that are posted to r/androiddev every other day. For each of these, there are dozens in the respective “categories”. Sometimes the same stories, or similar ones, also make the rounds on Hacker News. It seems Google treats Play Store bans and app removals with the same flippancy with which online games ban players suspected of cheating, or worse. Playing online games isn’t the career of most people who do it, but Android app development is, which leads to the obvious question: what do people do when they are banned?

After writing this, I realize my YouTube analogy is terrible. You see, on YouTube generally one receives strikes, instead of waking up one day to suddenly see their account banned. YouTubers also have the opportunity to profit from the drama caused by the policy changes by “reacting” to them, for example. And while YouTubers typically have the sympathy of their viewers, app developers have to deal with user outrage – because users have no idea, or don’t care, about why we’re being forced to massively degrade the performance and features of our apps. For example, the developer of ACR, a popular call recorder, had to deal with bad app reviews, abuse and profanity among thousands of emails from outraged users after removing the call log permission, and this was after an extensive campaign warning users of the upcoming changes (as a user of ACR, I uninstalled the Play Store version and installed the “unchained” version, which keeps the call log features, through XDA Labs).

As a freelance developer or as a small company, developing for Android is riskier than ever. I can start working on an app idea today, and it’s possible that in six months, when it is ready for the initial release, changes to the store policy will have rendered my app unpublishable or severely affected its functionality… in addition to the aforementioned point about APIs being deprecated and changing semantics, requiring constant upkeep of the code to keep up with the latest versions.

If you opened the links above, by now you have probably realized another thing: user support with actual humans is non-existent. If only their bots were as responsive as Google Assistant… and if they are not bots, then they are humans who only spit out canned responses, which is just as bad. It is widely known that the best method for getting problems with Google Play listings solved is to catch the attention of a Google employee on social media.

It seems the level of support Google gives you is correlated to how many people will read your rants about your problems with their platforms. And it’s an exponential correlation, because being big isn’t enough to get a moderate level of support; you must be giant. This is a recurring problem with most Google services, especially if you are not using G Suite (apparently, app developers do not count as “paying customers” when it comes to support). Of all the things I’d like the EU to regulate (and especially, to not regulate, but that’s a story for a different time), the obligation for these mega-corporations to provide actual user support is definitely one of them.

Going back to the probably flawed YouTube analogy, there’s one more parallel to draw: many people believe that in recent years, YouTube has been making changes to policies, business models and the “algorithm” that heavily favor the big, already-established creators and make it hard for smaller ones to ever be successful. I believe we are seeing a similar trend on the Google Play Store – just keep in mind that you must not measure an app’s popularity or “level of establishment” by the number of downloads or active users, but by how much profit it generates in ad revenue and IAP cuts.

“Android is open source”

“Android is open source” is the joke of the year – for the fifth consecutive year. While it is true that the Android Open Source Project (AOSP) is still a thing, many of the components that make Android recognizable and usable, both from an end user and developer’s perspective, are increasingly closed source.

Apps made by Google are able to do things third-party apps have trouble replicating, no doubt due to the tight-knit interaction between them and the proprietary behemoth that is Google Play Services. This is especially noticeable in the “Google” app itself, Google Assistant, and the Google launcher.

If you install an AOSP build, many things will be missing and many apps – my own included – will have trouble running. Projects looking to provide “de-googlified” versions of Android have developed extensive open source replacements for many of the functions provided by Google Play Services. The fact that these replacements had to be community-developed, and the fact that they are very much necessary to run the majority of popular applications, show that nowadays Android is “open source” in roughly the same sense that it is a “Linux distro”.

AOSP, on its own, is effectively controlled by Google. The existence of AOSP is important, if nothing else, to define the common APIs that the different “OEM flavors” of Android must support – ensuring, with minor caveats, that we can develop for Android and not for “Samsung’s Android” or “Nokia’s Android”. But what APIs come and what APIs go is completely decided by Google, and the same is true for the overall system architecture, security model, etc. This means Google can bend AOSP to their will, strip it of features and move things into proprietary components as much as they want.

Speaking of OEMs and inter-device compatibility, it’s obvious that this push towards implementing important functionality in Google Play Services and making the whole operating system operate around Google’s components has to do with keeping the “OEM flavors” under control. A positive effect for users and developers is that features and security patches become available even on devices that don’t receive OEM updates, or only receive updates for the major Android version they came with, and therefore would never receive the new features in the latest major release. A negative effect is that said changes can affect even old Android versions overnight and completely at Google’s discretion, much like restrictions on what APIs and permissions apps on the Play Store are allowed to use.

Google’s guiding light when it comes to Android openness seems to be to open the source only as much as necessary for OEMs to make it run on their devices. We are not at that extreme point – mainly because the biggest OEMs have enough leverage to prevent it from happening. Still, I feel that at this point, if Google were able to make Android entirely closed source, they would do it. I wonder what future Fuchsia holds for us in this regard.

So secure you can’t use it

The justifications for many of the changes in later Android versions and Google Play policies usually fall into one of two types: “security” and “user experience”, with the latter including “battery life”. I’m not sure for whom Google is designing their “user experience” in recent years, but it certainly isn’t for “proficient users” like me. Let’s, however, talk about security first.

Security should be proportional to the value of what it is protecting. With each major Android version, we see a bigger focus on security; for example, it’s becoming harder and harder to root a phone, short of installing a custom ROM that includes superuser functionality from the start. One might argue this is desirable, but then you notice security and privacy have also been used as the excuse to disallow the use of certain permissions, like call log and messaging access, or to remove APIs such as the external storage one.

This increase in security strength makes sense: security is now stronger because we are also storing more valuable information in our phones, from “old-fashioned” personal information about us and our acquaintances, to biometric information like fingerprint, facial and retinal scans. Of course, and this is probably the part Google et al. are most worried about, we’re also storing entire payment systems, the keys for DRM castles, and so on.

Before finishing my point about security, let’s talk a bit about user experience. User experience is another popular excuse for making changes while limiting or altogether removing certain features. If something has to be particularly complicated (or even “insecure”) in order to support the use cases of 1% of the users, it often gets simplified… while the “particularly complicated” or “insecure” system is stripped entirely, leaving the aforementioned 1% with a system that no longer supports their use cases. This doesn’t sound too bad, right? However, if you repeat the process enough times, as Google is bound to do in order to keep releasing new versions of their software (so that their employees can get their bonuses), tying the hands of 1% of the users at a time, you are probably going to be left with something that lets you watch ads only… and probably Google ads at that, I guess. You didn’t need to make phone calls, right? After all, the person on the other side might be pulling a social engineering scheme on you, or something…

Strong security and good user experience are hard to combine together. It seems that permission prompts do not provide sufficient security nor acceptable user experience, because apparently it’s easier to remove the permissions altogether than to let users have a choice.

User choice is what all of this boils down to, really. Android used to give me the choice of being slightly insecure in exchange for having more powerful and innovative features in the apps I install, than in the competing mobile platforms. It used to give me the choice of running 10 apps in the background and having my battery last half a day as a result, but now, if I want to do so, I must deal with 10 ongoing notifications. I used to be able to share files among apps as I do on my desktop, but apparently that is an affront to good security too. I used to be able to log the Wi-Fi networks in my vicinity every minute, but in Android 9 even that was limited to a handful of scans per hour, killing some legitimate use cases including my master’s thesis project in the process. Fortunately, in academia we can just pretend the latest Android version is 8.

Smart cards, including SIM cards, were invented to containerize the secure portion of systems. Authentication, attestation, all that was meant to be done there, such that the bigger system could be less secure and more flexible. Some time in the last two decades, multiple entities decided it was best (maybe it provided “better user experience”?) that important security operations be moved into the application processor, including entire contactless payment systems. Things like SafetyNet were created. My argument in this section goes way beyond rooting, but if my phone is rooted and one of the apps to which I granted root permission steals my banking details, … apparently the banking app shouldn’t have been allowed to run in the first place? Imagine if the online banking of my bank refused to open on my desktop because it knows I know the password for the administrator account.

Still on the topic of security, by limiting what apps distributed on the Play Store are allowed to do and ending support for legitimate use cases, Google ends up encouraging side-loading (direct APK download and installation). This is undesirable from a security point of view, and I don’t think I need to explain why.

Our phones are definitely more secure now, but so much “security” is crippling the use cases of people who do more than binge-watch YouTube and their social network feeds. We should also keep in mind that many people are growing up with smartphones and tablets alone, so “just use your desktop for those advanced tasks” is not an answer. It’s time for my ridiculous proposal of the week: what about not storing so much security-sensitive stuff in our phones, so that we don’t need so much security, and can actually get back the flexibility and “security pitfalls” we had before? Android, please let me shoot myself in the foot like you used to.

Lack of realistic alternatives

This evolution of Android towards appealing to the masses (or appealing to Google’s definition of what the general public should be allowed to do) would not worry me so much if, as a user, I had a viable mobile OS alternative. On the Apple side, we have iOS, whose appeal from the start was to provide a “it just works”, secure platform, with limited flexibility but equally limited margin for error. Such a platform is actually a godsend for many people, who certainly make up the majority of users, I don’t doubt. Such a platform doesn’t work for me, because as I said, I need to be able to shoot myself in the foot if I want to: let me have 2 hours of battery life if I want, let my own apps spy on my location if I want.

This was fine for many years, because we had Android, which let us do this kind of stuff. It just so happens that, because of AOSP, and because there were no other open source or licensable platforms with traction, Android ended up being the de facto standard for every smartphone that isn’t an Apple one. On the low end, Android is effectively the only option. Of course, this led to Android having the larger market share. Since “everyone” uses it now, there’s pressure to copy the iOS model of “it just works” and “safe for people with self-harm tendencies” – you can’t hurt yourself even if you want to.

Efforts to introduce an Android competitor have been laughable, at best. Windows Phone/Windows Mobile failed in part because of a weak and possibly too-late entry, combined with a dubious “vision” and bad management decisions on Microsoft’s part. In the end, what Microsoft had was actually good – if that weren’t the case, there wouldn’t still be plenty of die-hard WP/WM fans – but arriving so late (and with so many mixed signals about the future of the platform) meant developers were never sufficiently captivated, and without the top 100 apps, users won’t find the platform any good, no matter how excellent it is from a technical standpoint. Obviously, it does not help that a significant number of those “top 100 apps” are Google properties; in fact, the only reason Google has their apps on iOS is that, well, iOS was already there when Google arrived on the scene.

If even a big player with stupidly deep pockets like Microsoft can’t introduce a third mobile platform, the result of smaller-scale attempts like Firefox OS is quite predictable. These smaller attempts have an additional problem: finding hardware to run on. It doesn’t help that you can’t change the OS on a phone the way you can on a PC. In fact, in the long-gone year of 2015, I was already ranting about the lack of standardization in smartphone hardware. It’s actually fun to go back to that post, made when Android 4.4 was the latest version, and see how my perception of Android has changed.

I should also note that if a successful Android alternative appears, it will definitely run Android apps, probably through a compatibility layer. In a way, Android set the standard for apps much in the same way that 15 years ago, IE6 was setting web standards in the worst way possible. Did someone say antitrust?

Final thoughts

Android, and therefore Google, set the standard – and the implementation – for what we can and can’t do with a smartphone, except when Apple introduces a major innovation that OEMs and Google feel compelled to quickly replicate in Android. These days, Apple seems to be stalling a bit on the smartphone innovation front, so Google is taking the opportunity to “innovate” by making Android more similar to iOS, turning it into a cushioned, limited, kid-safe operating system that ties the hands of developers and proficient users.

Simultaneously, Google is “solving” the problem of excessive shovelware and even a bit of malware on the Play Store by adding more automation, being even less open about their actions, and being as deaf to feedback as ever. Because it’s hard to tell whether apps are using certain permissions legitimately – and because no user shall be trusted to decide that for themselves – useful applications, from call recording tools, to automation, to literally any app that might want to open arbitrary files in user storage, are being “made impossible” by the deprecation and removal of said permissions and APIs.

We desperately need an Android alternative, but the question of who will develop, use and target said alternative remains unanswered. What I do know is that I no longer feel happy as an Android developer, I no longer feel happy as an Android user, and I’m not at all likely to recommend Android to my friends and family.

Edited at 2:56 March 28th UTC to add clarification about Android clipboard access.

See the discussion for this article on Hacker News, r/AndroidDev, r/Android

I really like Discord. It’s a monster, it scares me

…and it’s also the next Steam.

Dear regular readers: we all know I’m not a regular writer, and you were probably expecting this to be the second post in the series about internet forums in 2018. That post is more than due by now – at this rate it won’t be finished by the end of the year – even though the series purposefully never had an announced schedule. I apologize for the delay, but bear with me: this post is not completely unrelated to the subject of that series.

Discord, in case you didn’t know, is free, proprietary instant messaging software with support for text, voice and video communication – or as they put it, “All-in-one voice and text chat for gamers that’s free, secure, and works on both your desktop and phone.” Launched in 2015, it has become very popular among gamers indeed – even though the service is perfectly usable and useful for purposes very distant from gaming, and for people who don’t even play games. In May, as it turned three years old, the service had 130 million registered users, but this figure is certainly out of date, as Discord gains over 6 million new users per month.

If you have ever used Slack, Discord is similar, but free, easier for random people to set up, and designed to cater to everyone, not just businesses and open source projects. If you have ever used Skype, Discord is similar, but generally works better: calls have much better quality (to the point where users’ microphones are the limiting factor), it uses fewer system resources than modern Skype clients on most platforms, and its UI, stability and reliability don’t get worse every month as Microsoft decides to ruin Skype some more. You can have direct conversations with other people or in a group, but Discord also has the concept of “servers”, which are usually dedicated to a game, community or topic, and have multiple “channels” – just like IRC and Slack channels – for organizing conversations and users into different topics. (Beware that despite the “server” name, Discord servers cannot be self-hosted; in technical documents, servers are called “guilds”.)

Example of Slack bot in action. Image credit: Robin Help Center

Much like in Slack (and, more recently, Skype, I believe), bots are first-class citizens, although they are perhaps not as central to the experience as in many Slack communities. In Discord, bots appear as any other user, but with a clearly visible “bot” tag; they can send and receive messages like any other user, participate in text and voice chats, and perform administrative/moderation tasks if given permission… to sum it up, the only limit is how much code is behind each bot.

Example of Discord bot in action. Discord bots can also join voice channels, e.g. to play music.
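To give an idea of how little code a bot needs, here is a minimal sketch using the community discord.py library, roughly as its API looked at the time of writing (it has since changed significantly); the token and the “!ping” command are placeholders, not something my UnderLX bot actually does:

    # Minimal Discord bot sketch (discord.py, circa-2018 async API).
    # The token and the "!ping" command are placeholders.
    import discord

    client = discord.Client()

    @client.event
    async def on_ready():
        # Runs once the bot has connected and identified itself.
        print("Logged in as", client.user.name)

    @client.event
    async def on_message(message):
        # Bots receive every message they can see, including their own.
        if message.author == client.user:
            return
        if message.content == "!ping":
            await client.send_message(message.channel, "pong")

    client.run("YOUR-BOT-TOKEN")

Give it the right permissions and it sits in the member list like anyone else, “bot” tag and all.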

I was introduced to Discord by a friend at the end of 2016. We were previously using Skype, and Discord was – even at the time – already clearly superior for our use cases. I found the “for gamers” aspect of it extremely cheesy, so much so that for a while it put me off using it as a Skype replacement. (At the time, we were using Skype to coordinate school work and talk about random stuff, and I really wasn’t a “gamer”, on PC or any other platform.) I finally caved in, to the point where I no longer have Skype start with my computers, and the Android app stays untouched for weeks – I only open it to talk to the two or three people who, despite heavy encouragement, didn’t switch to Discord. The only thing Discord lacked back then was screen sharing (this is no longer the case), but it was so good that we kept using it and went with makeshift solutions for screen sharing.

As time went by, I would go on to advocate for the use of Discord, join multiple servers, create my own, and even build a customized Discord bot for use in the UnderLX Discord server. Discord is pleasant to use, despite the fact that it tends to send duplicate messages under specific, terrible network conditions – an issue that is more prominent on mobile, at least on Android over mobile data.

Those who have been following what I say on the internet for longer might be surprised that I ended up using and advocating for a proprietary chat solution. After posts such as this one, where I look for a “free, privacy friendly” IM/VoIP solution, or the multiple random forum posts where I complain that all existing solutions are either proprietary and don’t preserve privacy/prevent data collection, or are “for neckbeards” for being unreliable or hard to set up, seeing me talk enthusiastically about Discord might make some heads spin.

I suppose this apparent change of heart is fueled by the same reason why many people, myself included, use the extremely popular digital store, DRM platform (and wannabe Discord competitor… a topic for later) Steam: convenience. It’s convenient to use the same store, launcher and license enforcer for all games and software; similarly, it’s convenient to use the same software to talk to everyone, across all platforms, conversation modes, and topics. It’s an exchange of freedom and privacy for convenience.

Surprise, surprise: it turns out that making a free-as-in-freedom – libre, if you prefer – instant messaging platform that provides the desired privacy and security properties, in addition to all the features people have come to expect from modern non-free platforms like Facebook Chat or Skype, while being as easy to use as them, is very difficult. Using the existing popular platforms does not involve setting up servers, sharing IP addresses among your contacts, or dealing with DDoS attacks against those servers or the contacts themselves; for an alternative platform to succeed, it must offer all of that convenience, and ideally be prepared to deal with the friction of getting everyone and their contacts to switch platforms. It was already difficult in 2013 when I wrote that post, and the number of hard-to-decentralize features in the modern chat experience hasn’t stopped growing in these five years. The technology giants are not interested in developing such a platform, and independent projects such as Matrix.org are quite promising but still far from being “there”. And so everyone turns to whatever everyone else is using.

In my opinion, Discord happened to be the best of the available, viable solutions that all my friends could actually use. It is, or was, a company and a product focused on providing a chat solution that’s independent from other products or larger companies, unlike Messenger, Hangouts or Skype, which come with all the baggage of Facebook, Google and Microsoft respectively. And despite having the Nitro subscription option that adds a few non-essential features here and there, Discord is basically free to use, without usage limits – unlike Slack, which targets company use and charges per user.

List of Discord Nitro perks in the current stable version of Discord. Discord is free to use, but users can pay $4.99/month – or ten times that per year – to get access to these features.

What about sustainability – what is Discord’s business model? To me it was painfully obvious that Nitro subscriptions couldn’t make up for all the expenses. Could they just be burning through VC money, only to die later? Even if they were selling users’ data, it wasn’t immediately obvious to me that the service would be sustainable on its own. But I never thought too much about this, because Discord is super convenient, and the alternative popular solutions run their own data collection too, so I just shrugged and moved on. If Discord eventually ran out of money, oh well, we’d find an alternative later.

Back to praising the product: Discord is cross-platform, with a consistent experience everywhere, and can be used in both personal/informal and work/formal contexts. In fact, Discord was initially promoted to Reddit communities as a way to replace their inconvenient IRC servers, and not all of those communities were related to gaming. If only it didn’t scream “for gamers” all over the place…

I initially dismissed this insistent targeting of the “gamers” market as just a way to continue the segmentation that already existed… after all, before Discord there was TeamSpeak, which was already aimed at gamers and indeed primarily used by them. By continuing to target and cater to this very big niche, Discord avoided competing head-on with established players in the general instant messaging panorama, like the aforementioned Skype, Facebook Messenger and Hangouts, and with more mobile-centric solutions like WhatsApp or Telegram.

I believed that at some point, Discord would either gradually drop the “chat for gamers” moniker, or introduce a separate, enterprise-oriented service, perhaps with a self-hosting option – although Slack has taught us that self-hosting isn’t necessary for a product to succeed in the enterprise space. This would be their true money-maker – after all, don’t they say the big money is on the enterprise side of things? Every now and then I joked, half-seriously, “when are they going to introduce Discord for Business?”

I was only half-serious because my experience using Discord, a supposedly gaming-oriented product, for all things non-gaming – like coordinating an open source project or working remotely with my colleagues – was superb: better than my admittedly brief contact with Slack, and better than the multiple years throughout which I used Skype and IRC for such things. The “for gamers” aspect was really a stain on what is otherwise a product perfectly usable in formal contexts, for things that have nothing to do with playing games, and in some situations it stopped me from handing out my Discord ID and suggesting Discord as the best way to contact me over the internet for all the things email doesn’t do.

These last few days, Discord did something that solved the puzzle for me and made their apparent endgame much clearer. It turns out their focus on gaming wasn’t just because the company behind Discord was initially a game development studio that pivoted into online chat, or because Discord was a no-frills alternative to TeamSpeak (that did so much more), nor because gamers were an easy market to get into – typically “flexible” users who know their way around installing software, are often eager to try new things, use any platform their parents are not on, and share the things they like with other players and their friends. I mean, all of these could certainly have been factors, but I think there’s a bigger thing: it turns out Discord is out to eat Steam’s (Valve’s) lunch. Don’t believe me? Read their blog post introducing the Discord Store.

In hindsight, it’s relatively obvious this was coming; in fact, I believe this was the plan all along – a move so genius it could hardly have been improvised. Earn the goodwill of the gamer community, get millions of gamers who just want a chat client that’s better than what Steam and Skype provide, while being as universal as those among the people they want to talk to (i.e. gamers), and when the time is right, become a game store which just happens to already have millions of potential clients in it. It’s like organizing a really good bikers’ convention, becoming famous for being a really good bikers’ convention, and then during one year’s edition, ta-da! It’s also a dealership!

The most interesting part of all this, in my opinion, is that Discord’s and Steam’s histories are, in a way, symmetrical. Steam, launched in 2003, was created by Valve – initially a game development company – as a client for their games. Steam would evolve into what’s certainly the world’s most recognizable and popular cross-platform software store and software licensing platform, with over 150 million users nowadays (a number that might be off by over 30 million). As part of this evolution, Steam got an instant messaging service, so users could chat with their friends, even in-game through the Steam overlay. After a decade without major changes, a revamped version of the Steam chat was recently released, and it’s impossible not to draw comparisons with Discord.

The recently introduced Steam Chat UI. Sure, it’s much nicer, and you can and should draw comparisons, but it’s no Discord… yet.

I used to think Steam could ditch its chat component altogether and just focus on being great at everything else it does (something many people argue Valve hasn’t been doing lately), and I wasn’t the only one thinking this. We could just use Discord, whose focus was being great chat software, and Steam could focus on being a great store. But now I completely understand what Valve has done, and perhaps the one major failure I can point out right now is that they simply took too long to draft a reply. Because, on the other, “symmetrical” side of the story…

Discord was developed by Hammer & Chisel, recently renamed Discord Inc., a game development studio founded in 2012 which released a single unsuccessful game before pivoting into what they do now – which used to be developing an instant messaging platform, but apparently now includes developing an online game store too. Discord: chat software that got a store; Steam: a store that got chat functionality – both developed by companies that are, or once were, into game development. Sadly, before focusing on the game store part of things, Discord Inc. seems to have skipped the part where they would publish great games and their sequels, then stop, leaving everyone begging for the third iteration.

It is my belief that not long after Discord became extremely successful – which, in my opinion, was some time in 2016 – and a huge number of gamers got on board, they set their eyes on becoming the next Steam. It’s not just gamers they are catering to: they started working with game developers long ago to build things like Rich Presence, not to mention their developer portal was always focused not just on Discord bots, but on applications that authenticate against Discord and generally interact with it. This certainly helped open communication channels with some game developers, which may prove useful for getting games onto their store.

Discord is possibly trying to eat a few more lunches besides Valve’s, too. Discord Nitro (their subscription-based paid tier, which adds extra features such as the ability to use custom emoji across all servers or upload larger files in conversations) has always seemed like a poor value proposition to me, but I know this is not the universal opinion, as I have seen multiple Nitro subscribers. Maybe I just don’t have enough disposable income; in any case, Nitro just became more interesting, as now “It’s kinda like Netflix for games.” From what I understand, it’ll work a bit like Humble Monthly, but it isn’t yet completely clear to me whether the games are yours to keep – as on Humble Monthly – or whether it’s more of an “extended free weekend”, where Nitro users get to play some games for free while they are in rotation. (Update: free games with Discord Nitro will not be permanent.)

This Discord pivot also has other, unexpected ramifications. As you might know, on many networks all game-related stuff (like Steam) is blocked, even though instant messaging and social networks are often not, as they are used to communicate with clients, suppliers, or even between co-workers, as is the case with Slack. I fear that by introducing a store, Discord will fall even further into the “games” bucket, and once it definitively earns the perception of being a games-only thing, it’ll be blocked on many work and school networks, complicating its use for activities besides gaming. The positive side is that if they do decide to launch that enterprise version, this is an effective way of forcing businesses to use it instead of the free version, as the “general populace” version will be too tightly intertwined with the activity of playing games.

I’ll be honest… things are not playing out the way I wish they would. Discord scares me because I now feel tricked, and who knows what other tricks they have up their sleeve. I would rather have an awesome chat and an awesome store, provided separately – or alternatively, an awesome chat and store, all-in-one. (And if the Discord team reads this, they’ll certainly say “but we’re going to be the awesome chat and store, all-in-one!”) But at this rate, we’ll have two competing store-and-chat platforms… because we didn’t have enough stores/game clients or instant messengers already, right?

xkcd “Standards”: Fortunately, the charging one has been solved now that we’ve all standardized on mini-USB. Or is it micro-USB? Shit.

Because of course this one had to be here, right? I could also have added a screenshot of Google’s IM apps, but I couldn’t be bothered to find screenshots of all of them, let alone install them.

You can of course say, “just pick one side and your life will be simpler”, but we all know that won’t be the case. Steam chat is a long way from being as good as Discord, and the Discord store will certainly take its time to become a serious Steam competitor. Steam chat will never sound quite right for many of Discord’s never-officially-acknowledged use cases; for example, even if Steam copies all of Discord’s features and adds the concept of servers/guilds, it’ll never sound quite right to have the UnderLX server on Steam, will it? (Well… unless UnderLX pivots into something else as well, I guess.) Similarly, it’ll be harder to “sell” Discord’s non-gaming use cases by telling people to ignore the “for gamers” part, as I’ve been doing, if Discord is blatantly a game store and game launcher.

Of course I’ll keep using Discord, though I’ll probably not recommend it as much now, and of course I’ll keep using Steam while mostly ignoring its chat capabilities – not least because most people I talk to are not on there, and most of those who are, are also on Discord. But for now, I’ll keep the games tab on Discord disabled, and I seriously hope they’ll keep providing an option to disable all the store/launcher stuff… so I can keep hiding the monster under the bed.

Internet forums in 2018: are they really dying?

In the introductory post of this series about the state of internet forums, I mentioned that, to me at least, forums feel like a relic of the past, a medium many internet users will never experience, and that many forums have seen activity trend downwards over the past few years. But is this just a personal feeling supported by anecdotes, or is it really the general situation?

In the previous post, I also said that these would be subjective texts posted to a personal blog, not scientific studies. However, I thought it would be interesting to use these posts to do some introspection and try to understand where this feeling that “forums are dying” comes from. After all, it might just be that I and my circle of friends are abandoning forums, and (in a possibly correlated way) the forums we used to frequent are also dying, while the big picture is quite different. There’s also the possibility that this trend is limited to certain cultures, regions of the world, or specific forum topics/themes.

To do this “introspection”, I’ll be going through some of the forums I know about, and maybe even some I don’t, to see how they are doing. I have good news: you can completely skip this lengthy post and probably still enjoy the rest of the series. Yes, this series will still be primarily anecdote-based, but at least we will have looked at a slightly larger number of anecdotes – and taken a deeper look at them. Yay for small sample sizes. Let’s start with the ones I know about.

FreeVPS

This forum was founded in April of 2010, and it is the first forum I remember being a part of, although I know for sure I signed up for other forums before that one – also related to web hosting; I just don’t remember their names anymore, and I’m sure they have not been online for years now. I was one of the first 20 members of that forum, and I ended up being a fairly prominent one, serving as a moderator for over a year, between 2011 and 2012 (or even the start of 2013). I actually went through some “drama” together with the rest of the team, involving forum ownership/administration changes.

I put “drama” in quotes because, to this day, I’m still not sure whether the things we saw as extremely serious threats were actually that serious… keep in mind that, at the time at least, most of the staff was under 20, perhaps even under 18. I know that at some point legal paperwork was flying around against us; someone wanted to trademark FreeVPS and take the forums from us, or something like that. Now I find all of this a bit cringe-worthy, but I learned quite a lot (from systems administration, to project management, to time management and other soft skills). Anyway, let’s set the nostalgia aside…

This was actually the first forum I “abandoned”, in the sense that I stopped going there as frequently or participating as much as I initially did. And so it was only while “researching” for this post that I found out that this community, too, is in trouble. Activity levels are way, way down from the “golden days”, with what seems to be an average of five posts per day – and sometimes a day goes by without a single post. Back in those “golden days”, there would be over 100 posts per day, so many that the statistics page of the forum still indicates an all-time average of 73 posts/day.

Cemetech

This one is not dead, it’s just… trying to find a way to evolve, maybe reinvent itself a bit, in an attempt to bring back the glory of the earlier days. Founded in 2000 – just barely after I learned the basics of how to read Portuguese (let alone write English) – Cemetech (pronounced KE’me’tek) is a community focused on graphing calculators and, to a lesser degree, DIY electronics and other… nerd stuff. At least, this is how I saw it back when I joined in November 2011. To be fair, of all the things this community had to offer and discuss, I only ever really cared about the Casio Prizm discussions. After I moved on from Casio Prizm development – at a time when forum activity was already decreasing – I tried to foment discussion of my Clouttery project, with mild but disappointing success.

Cemetech is a bit of a strange beast because, at least in my view, it hinges a bit too much on the interests, activities and projects of its founding member KermM and the other staff, who, to me at least, always seemed to be close friends with each other. As one example, I felt that a lot of attention was given to Casio calculators, and especially the Prizm models, at a time when KermM was “fed up” with TI for some reason; but once TI started releasing new models, with color screens and other features debuted by the Prizm, Casio calculators were slowly forgotten in that community. I’m sure they didn’t do this on purpose – for many years, until the Prizm was launched and caught the staff’s eyes, the community was much more TI-focused than Casio-focused, and it wasn’t unexpected to see it return a bit to its roots. Perhaps I should rephrase my claim that it hinges too much on the staff’s interests… what I mean is, the staff there used to be discussion promoters, posting a lot in every thread; since KermM more or less left, to pursue his PhD and then to found a startup, Geopipe, activity levels took a hit – at least, again, from my point of view.

This is quite an anecdotal example… not only is the community this “strange beast”, it’s also focused on the rather niche topic of graphing calculators – even though it always had many more topics, it was calculator stuff that brought in the most people and the most posts. As a hacker’s platform, graphing calculators are dying, due to new exam regulations in multiple countries imposing restrictions on their hardware and software, and sometimes doing away with this kind of calculator entirely.

It isn’t surprising that a forum about a dying niche subject would be dying. Cemetech now seems to be down to about 40 posts per day… wait, that doesn’t seem very low. But if you look at the forum index alone, you’ll see that many sections haven’t seen a new post this month, and the number of topics under active discussion seems much lower to me than it once was. Oh well, maybe it’s just me – this is a subjective blog post, after all.

CodeWalrus

Yet another community focused on graphing calculators – at least initially. This one is much more recent than most of the communities around this topic, but it too has been struggling with a lack of activity. In my opinion, it never had that much activity to begin with, and there’s no doubt it went through some rough months – fortunately, it seems to be picking up a bit now.

CodeWalrus was founded in October 2014… and I’m not sure how to explain this, and I’ll probably get it wrong, but it was founded by people previously associated with Omnimaga (yet another calculator community) – including its founder – who no longer wanted to be involved in Omnimaga. That older community is, too, pretty much inactive these days (see its stats). As for CodeWalrus, which also has a stats page, things look way brighter. At the very least, you certainly cannot accuse the members of not trying to cheer things up.

Before I move on to another forum, note how I said that CodeWalrus was only initially focused on graphing calculators (by virtue of the interests of the members, not because that was a topic imposed by the administration). Well, their strategy – in place from the very first day – of catering to people with other interests seems to have paid off. A brief look at the active topic lists shows no posts related to graphing calculators… wow.

XDA Developers

I don’t think this community needs to be introduced to my readers. It’s a giant community, and I can’t quite take its pulse the way I did with the other anecdotes in this post. With over eight million members, its scale is completely different from that of the other forums mentioned so far. It isn’t a dying forum, and that’s exactly why I brought it up: to show that definitely not all forums are dying. Or maybe only big forums survive? We shall look into that in future posts.

This forum might not be dying, but it could still be interesting to see whether the variation in the number of posts and sign-ups is positive or negative. I searched, and searched, and could not find a live statistics page or any up-to-date report. With a forum this big, it’s quite possible that computing these kinds of statistics in real time is simply unfeasible. However, certain forum views still show the current totals at the bottom. With the help of the Internet Archive’s Wayback Machine, we can plot stuff over time…
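For the curious, here is roughly how one can pull those totals out of archived pages – a sketch built on the Wayback Machine’s CDX API; the forum URL and the regular expression are illustrative, since every forum software prints its totals differently:

    # Sketch: extract a forum's total post count, year by year, from
    # Wayback Machine snapshots. URL and regex are illustrative only.
    import re
    import requests

    CDX = "http://web.archive.org/cdx/search/cdx"
    PAGE = "https://forum.example.com/index.php"  # hypothetical forum index

    # Ask the CDX API for at most one capture per year.
    rows = requests.get(CDX, params={
        "url": PAGE,
        "output": "json",
        "collapse": "timestamp:4",
        "fl": "timestamp,original",
    }).json()[1:]  # the first row is a header

    for timestamp, original in rows:
        snapshot = "http://web.archive.org/web/%s/%s" % (timestamp, original)
        html = requests.get(snapshot).text
        match = re.search(r"([\d,]+)\s+posts", html, re.IGNORECASE)
        if match:
            print(timestamp[:4], match.group(1))

From there, it’s just a matter of feeding the year/count pairs to your plotting tool of choice.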

Take from that what you will… but its growth doesn’t seem to be decelerating, even if it apparently isn’t as active as it was in 2012-2013. It could also be that, despite the slight decrease in the speed at which new posts are added, the quality of the discussion has improved, with an overall better result.

SkyscraperCity

Still in the big forums league: SkyscraperCity, a community founded in 2002, with about one million members and over 100 million posts, claims to be the world’s biggest community on “skyscrapers and everything in between”. In practice, it’s a forum about urbanism that, alongside many international discussions, also hosts regional sub-forums where all kinds of stuff is discussed. For example, in the Portuguese forum, you can find topics ranging from architecture and urbanism to transportation and infrastructure, general topics about what’s going on in the country, and completely off-topic stuff like discussions about what’s on TV. It is at SkyscraperCity that you can find the most forum-based discussion around the Lisbon subway, for example – with over 3000 posts per year on that subject alone.

The Lisbon subway discussions are, precisely, the only ones I participate in at that forum, so even though I vaguely know about the other sub-forums and topics, I have no idea whether the forum is more or less active than it used to be. Using the Wayback Machine again, let’s take a look at the evolution of the number of members and total posts over the last 10 years.

This graph is even less interesting than the XDA one. The number of members and posts has been growing essentially linearly. There is a slight deceleration in the last three to four years, especially in the member count, but that could be due to better spammer detection (something as simple as a better captcha system could have that effect).

SkyscraperCity is yet another forum that, despite using antiquated software and not having the best availability or reliability record, doesn’t seem to be going anywhere. It isn’t displaying the “exponential user growth” that investors and shareholders so like to hear about. But hey, one of the nice things about forums is that they generally don’t need to boost user counts like that, because they typically don’t have shareholders to report nice numbers to.

LowEndTalk

Looks like we’re back to the subject of web hosting. LowEndTalk is the discussion forum of LowEndBox, a website that lists deals on low-spec virtual and dedicated private servers. I never participated in or followed this forum in any way, but I have known about its existence, and have had a good idea of its size, for almost as long as I have known about FreeVPS.

LowEndTalk makes my job a bit hard because, as far as I can see, they don’t list the total number of members or posts anywhere. Fortunately, they do show the total number of threads (or “discussions”, as their forum software calls them), rounded to the nearest hundred. Let’s see if we can get any sort of trend out of this. Wayback Machine to the rescue…

Judging by the thread counts alone, it looks like LowEndTalk is yet another forum that isn’t going anywhere, with over 7000 new threads added each year. The chart gives the impression that growth is more linear than it actually is: the rise in the number of threads is actually slowing down a bit, from an average of about 8700 threads/year in 2013-2015 to about 7500 threads/year from 2015 to the present. Nothing to worry about.

You might be wondering why I brought up LowEndTalk, since it’s not an especially big or especially well-known forum, nor is it one I frequent. The reason is that during my “research” for this post, a few things about LowEndTalk caught my eye.

For a start, their forum software (Vanilla Forums) is not one I commonly see in the wild, or at least not one I recognize – despite the fact that, according to Vanilla Forums, it is “Used by many of the world’s leading brands”. At least in its LowEndTalk incarnation, it is extremely “clean”, simple, and still good looking. It doesn’t use “responsive” web design, but it’s very readable and fast. It’s definitely different from your run-of-the-mill SMF/phpBB/MyBB-powered forum. But is it better? We shall look into this in a future post.

LowEndTalk only became a “traditional forum” in 2011. Before that, it was a Q&A website (like StackOverflow, for example) powered by OSQA. Using the Wayback Machine, it’s easy to see that some of the hot topics back then were discussions about moving from OSQA to something more appropriate to the community’s needs. This community therefore has yet another interesting quirk: it was not born as a “traditional forum”; it actually moved into one after the community was already bootstrapped and the Q&A model had been deemed inappropriate.

LowEndTalk has recently set up a Discord “server”. Unlike what, at one point, seemed to be the plan at CodeWalrus, it is apparently not their intention to move discussion out of the forum and into Discord. Still, I think this is an interesting data point for a future post about forum alternatives – what are people moving to, after all?

Rockbox Forums

Including this one here is pretty ridiculous, but I thought I’d do it anyway, if only as a sort of honorable mention to the thousands of small forums hosting communities of developers and users of open source projects. I feel that these kinds of forums, along with self-hosted issue trackers (such as Flyspray or Trac), were a much more common sight in the distant past before GitHub.

Anyway, what are Rockbox and its forums? Rockbox is an open-source firmware for that ancient thing smartphones made us forget about, the MP3 player. You know, of the old-school iPod-with-clickwheel kind.

I used Rockbox for a couple of years on the 2nd generation iPod Nano my parents won in some sort of raffle and never really used much. iTunes made it a PITA to use; my family (unlike me) isn’t that much into listening to music and maintaining a music library; and finally, my dad was into PDAs and Pocket PCs way before the iPhone even launched.

For us, the thought of playing music on an MP3 player was already kind of strange even when that iPod was brand new – why go through the hassle of using iTunes, transcoding music files, and having to charge and carry around yet another device… when we could just take a full-size SD card, insert it into one of those ginormous Pocket PCs (which also made phone calls), and listen to our existing MP3s and WMAs (no M4A transcoding required!) using Windows-freaking-Media Player on Windows Mobile (or any other player of our liking, since those phones could run arbitrary EXEs compiled for Windows Mobile, and many alternative player applications existed).

Anyway, to finish telling yet another personal story: I found out about Rockbox at a point when the iPod was already forgotten in a drawer. Long story short, that iPod has seen 500% more use since I installed Rockbox on it. Rockbox can even run Doom, and it has a Game Boy/Game Boy Color emulator – how awesome is that? (And no, Rockbox is not Linux-powered, but there was a separate project which ported Linux to some older iPods.) Sadly, the capacitors of my iPod’s screen have failed and only 30% of the screen is readable now, making the device a bit hard to use.

I would still use that iPod to this day if it weren’t for that malfunction (which I don’t feel like spending $10 on a new screen to fix): the iPod has just 4 GB of storage, but Rockbox can play Opus, the awesome audio format that’s simply the best in terms of quality/filesize ratio (I hope this sentence will look extremely dated in 10 years). I can fit the relevant part of my music library in there by converting over 10 GB of high-quality MP3, M4A and FLAC into less than 4 GB of Opus at 96 kbit/s – which sounds great.
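In case you’re curious what that conversion looks like in practice, here’s a sketch of the kind of batch job I mean, shelling out to ffmpeg (this assumes an ffmpeg build with libopus support; the paths are just examples):

    # Sketch: batch-convert a music library to Opus at 96 kbit/s using
    # ffmpeg with libopus. Source and destination paths are examples.
    import pathlib
    import subprocess

    SRC = pathlib.Path("~/Music").expanduser()
    DST = pathlib.Path("~/Music-opus").expanduser()

    for track in SRC.rglob("*"):
        if track.suffix.lower() not in {".mp3", ".m4a", ".flac"}:
            continue
        out = DST / track.relative_to(SRC).with_suffix(".opus")
        if out.exists():
            continue  # already converted
        out.parent.mkdir(parents=True, exist_ok=True)
        subprocess.run([
            "ffmpeg", "-i", str(track),
            "-c:a", "libopus", "-b:a", "96k",
            str(out),
        ], check=True)

The resulting tree mirrors the original library, just smaller – small enough to fit in those 4 GB.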

Rockbox, as you might guess, is pretty much dead these days. It never got to the point of supporting the most recent MP3 players or the latest iPod models, thanks to Apple and their locked bootloaders. MP3 players became a niche/audiophile product as people moved on to smartphones; demand dropped, and prices went up as a result. Perhaps most importantly, device support aside, Rockbox is a finished product: it can play any relevant music format (including tracker music), and it has everything you could possibly want from a music player in terms of sound effects/adjustments, playlist control and library browsing. And now for something that might catch you off-balance: Rockbox is a project by Haxx – yes, the same Haxx of the extremely popular and successful program curl! In fact, there is a noticeable overlap between the two development teams. Both are awesome pieces of software.

I just realized this post was supposed to be about “forums” and not “alternative firmware for embedded devices and opinions about sound formats”… shit. Anyway, Rockbox is dead and, surprise! – the forums are pretty much dead too, for obvious reasons. It would be quite interesting if the forums outlived the open source project they were built around, but it doesn’t seem like this will be the case, and I have never seen it happen.

Conclusions

Maybe forums aren’t dying left and right like I thought they were. But they don’t seem to be growing as fast as the rest of the internet – although, at this point, I’m just making stuff up, because I don’t know how fast “the internet” grows. Perhaps it has to do with that shareholder thing: forums don’t usually have to report inflated numbers to anyone. Or maybe people really are locked inside the bubble-decorated walled garden of the EVIL ZUCC. Yeah, let’s pretend that is the case, so that my plans for the following posts are not completely foiled. As for this pointless anecdote-based post, its end is here.

BTW: did you know I started working on this post on the 7th of March… only to leave it rotting, unfinished, since the 9th of March… and finally finish it today? But it’s fine: this way, I could include a ZUCC reference while people are still outraged at Facebook, and before they go back to using it happily ever after.

Thoughts about internet forums in 2018: a series of posts

Internet forums… websites and communities where I spent thousands of hours. To me, they already feel like relics of the past. I’m 21 and began using the internet about 10 years ago, maybe more. I’m sure the majority of kids who are now starting to browse the web (probably at earlier ages than I did) will never participate in a classic forum, and perhaps never even land on one through some random search. You know, the kind of discussion website with threads, posts, moderators and rules, where things are typically sorted in chronological order, and where reactions to posts (“votes”, “likes”, and similar) are either unavailable or something bolted onto the system as an add-on rather than a core mechanic.

In recent years, even I – someone who used to be staff on a medium-sized forum, FreeVPS – have withdrawn from most forums where I used to actively participate. Many of my friends who know what a forum is either never got around to participating in one, or now participate much less than they did two, three, four years ago.

In recent months, it has come to my attention that some of the forum-based communities I frequented are facing serious drops in activity, sometimes to the point where days go by without a single post, where previously there used to be at least one post per hour. While this reduction in activity is partly because the main topics of those forums are becoming more and more niche, I believe a big chunk of the problem has to do with the fact that they are, well, forums. The lack of activity has even led to meta-discussions on new directions and ways to bring life back to these communities – see the threads at Cemetech and CodeWalrus.

This is the first post of a series in which I’ll look into the state of forums on today’s web: identifying what alternative discussion formats and platforms exist, why users seem to prefer (or hate) them, and what the role of forums is nowadays – what they can still do better than other mediums. With this being a blog and not a scientific study, I will obviously write from my own experience and that of my peers, producing subjective, unverifiable, but hopefully enjoyable content. I’ll also give my opinion on why forums are still necessary, and suggestions on how forum-based communities can become more appealing to today’s web users.

With this, I hope to bring some life and regularity to this blog – blogs being another medium that is nowhere near as popular as it once was… maybe I’ll look a bit into that too. This post will act as the index of the series, so links to new posts will appear here once they are published. Stay tuned.

Series Index

A brief mention of my popular project, UnderLX

If I haven’t posted much here, it’s in part because the masters degree I’m pursuing is quite demanding, and in part because, over the last few months, my “creative writing” time has often been spent replying to interviews about my UnderLX project for Portuguese national media (and dealing with the exposure, which sadly is not expo$ure):

http://www.sabado.pt/portugal/detalhe/as-perturbacoes-no-metro-de-lisboa-sao-tantas-que-inspiraram-uma-app

https://www.noticiasaominuto.com/pais/937964/linha-mais-problematica-horarios-pico-de-falhas-app-poe-o-metro-a-nu

…and with more to probably come 😉

Advice on Casio Prizm development

A few years ago, I was very active in the Casio Prizm development community: I developed three notable add-ins, contributed to the Prizm wiki and to libfxcg (my fork), and even did a bit of reverse-engineering (the calculator OS is closed-source and there is no official SDK), which resulted in the discovery of a couple of syscalls and in more detailed documentation for some others. Because of this, once in a while I still get messages about my add-ins, which I’m happy to support when possible. Most annoyingly, I also get messages about Prizm development in general, usually about how to start making add-ins.

Why are these messages annoying? Because I don’t really know how to answer them. When I started developing add-ins for the Prizm, I had little to no knowledge of the C programming language, and yet, despite the fact that add-ins can’t use all the stuff “normal” C programs can (the libc provided by libfxcg is incomplete; the filesystem uses a different API; there’s no threading; the stack is giant compared to the heap; etc.), I managed to learn it. It certainly helped that I had some previous programming experience in other languages, even if it was just sloppy code, but I don’t have much of an idea of what to say to someone who intends to learn programming using the Prizm.

I usually end up saying that learning programming using the Prizm is a bad idea, probably coming across as extremely discouraging. However, I do hope it’s for the best, and that these people will still learn programming – just not by developing add-ins! Had my first contact with programming been through Prizm add-in development, I would most certainly have chosen a career path other than computer engineering. I mean this seriously. I’m glad my first contact with programming was through sloppy Visual Basic code. Anyway, I already wrote a post about my programming experience – it needs updating, but it should do.

Learning how to program, even in an “easy” language on a common platform, can be overwhelming. For a programmer used to higher-level development, learning the Prizm – a poorly documented platform with a small developer community – can be overwhelming. Combine learning how to program with learning a poorly documented embedded system, and it will most likely be very overwhelming. Of course, nothing will stop someone extremely motivated – hopefully, not even my less encouraging replies, or this blog post.

What follows is a partial reproduction of an email I recently sent in reply to yet another of these inquiries about how to start developing for the Prizm. I have edited it to make it less specific to the situation of the person I was replying to. I’ll also use the terms “Prizm” and “fx-CG 50” interchangeably, as add-ins built for the former run – with a few exceptions due to sloppy coding (one of mine is among those exceptions…) – on all Casio Prizm models: fx-CG 10, fx-CG 20 and fx-CG 50.


Do you have any previous programming experience? If not, I honestly do not recommend starting with the Prizm or any other Casio graphing calculator. If yes, then be aware that this is not an “easy” platform to develop for. Either way, here are a few reasons why:

  • Prizm add-ins are written in C or in very limited C++ (which might as well be considered C). By today’s standards, these are very low-level languages that require manual memory management and a very good awareness of the machine, and they provide very little protection against programmer mistakes. While some people had C as their first programming language, it is by no means a beginner-friendly one.
  • Even if you already know C well, or if you learn it from any common book, tutorial or course, you’ll be disappointed to find out that much of the standard library is not present, or is insufficiently implemented, in the Prizm calculators.
  • Add-in development for the Prizm was made possible through reverse-engineering and educated guesses based on what was known about previous models like the fx-9860G. While we now understand the essential things about the OS on these calculators, many things are yet to be known.
  • Documentation is lacking and the community is not very large. This essentially means that you won’t be able to just google your way through many problems.
  • Reverse-engineering, documentation and development efforts for the Prizm have basically stalled. You’ll find that many materials only mention the fx-CG 10/20, but since the fx-CG 50 is basically just a faster version of those with a mostly compatible OS (although some things, like memory addresses, have changed), almost everything you’ll ever need will still apply.

Now, I’m sorry if I came across as dismissive or discouraging; I’m just trying to make sure you know what’s in front of you.
For Casio Prizm development specifically, here is where I can point you:

Prizm forums at Cemetech
Use these to ask any questions you might have and to try to find solutions to any problems you encounter. There are also some guides there, mainly on how to set up the development environment (compiler and such), but I’m afraid they might be a bit out of date. Then again, as I said, development efforts have mostly stalled, so you can consider anything from 2014 or early 2015 to be up-to-date. Specifically, do not follow the “[HOWTO] Prizm C Development” guide there, as it is out of date.

Prizm wiki
This wiki contains much information on the calculator, the reverse-engineered OS functions (“syscalls”) that can be used from add-ins, etc. It also contains more up-to-date instructions on how to set up a development environment.

Personally, I mostly moved on from Prizm development about three years ago, as I began pursuing a degree in Information Systems and Computer Engineering. Every year or so, I make a short comeback to fix urgent issues with my add-ins and, eventually, to make them compatible with new OS versions and calculator models – as was the case with the fx-CG 50 – as long as that does not require too much time and effort. As time passes and I work with other technologies, the more I realize how “hard” a platform the Prizm is, and the less motivated I am to build stuff for it again; the fact that I no longer use my fx-CG 20 nearly as much as I did in high school doesn’t help either.

I’m afraid I can’t help you much more, as I’ve forgotten much of what I knew about the Prizm, both the “theoretical” and the “practical” knowledge, and I no longer have practical access to a development environment for it. I tried to put as much of my knowledge as I could into the Prizm wiki before I left, and I believe the people who now frequent the Cemetech forums will be able to help you much better than I can.


I think that one day I might find it interesting to work on the Prizm again, but perhaps more from a reverse-engineering angle. As for the fun of developing for a constrained, embedded system, there are much more appealing constrained systems out there, like the ESP8266.