I remade the GTA VI trailer in Watch Dogs

I did not want to close the year without adding something to this website, and to keep with the theme of the blog post series I might never finish, here is another post about Watch Dogs… but this one is more of an audiovisual experience.

That’s right: for comedic purposes, I used Watch Dogs to make a high-effort recreation of the first trailer of GTA VI. Despite GTA VI still being just a trailer, a bunch of rumors, and a vaster-than-usual collection of leaks, that trailer may have hyped and warmed more people’s hearts than whatever won “game of the year” did – and we are talking about a year in which a bunch of very good games released. You can tell that piece of media from Rockstar Games tickled something in me too, or I wouldn’t have spent probably over fifty hours carefully recreating it using a game from a different series.

Soon after the trailer dropped, I got the feeling that I wanted to parody it in some way. The avalanche of trailer reaction content that came immediately after its release – including from respectable channels like Digital Foundry, who probably spent as much time analyzing the trailer from a technical perspective as they spend looking into some actually released games – had me entertain the idea of making a “Which GTA VI trailer analysis is right for you?” sort of meta-analysis meme video. But I realized that making it properly would require actually watching a large portion of that reaction content, and I was definitely not feeling like it. It would also require making a lot of quips about channels and content creators I am not familiar with. Overall, I don’t think it would have been a good use of anyone’s time: I wouldn’t have had as much fun making it, and it wouldn’t be that fun to watch.

The idea of recreating the trailer in other games is hardly original. After all, I have heard of at least two recreations of the trailer in GTA V; there’s at least one in GTA San Andreas as well; I hope someone has made one in Vice City, because it just makes sense; and, in the same vein as mine, there are also recreations in different game series, including in Red Dead Redemption and in Saints Row. As far as I know, though, mine is the first one made in Watch Dogs.

I did not mean to imply anything by using a game whose reception was controversial because of trailers/vertical slice demos that hyped people up for something that, according to many, was not really delivered in the final game (hence the nod to E3 2013 at the start – RIP E3, by the way). Nor is the idea here to say that Watch Dogs, a 2014 game, looks as good as what’s pictured in the trailer for GTA VI, a game set to release eleven years later. Largely, I chose this game because I like Watch Dogs, even if I am not the most die-hard fan you’ll find; because it is the only game other than GTA V where I have some modding experience; and because nobody had done this using Watch Dogs.

This was anything but easy to pull off: the game doesn’t even have a conventional photo mode, let alone anything like the Rockstar Editor or Director Mode in GTA V. There aren’t many mods for the games in the Watch Dogs series, especially not for the two most recent ones, and the majority of these mods aren’t focused on helping people make machinima. One big exception is the camera tools that I am using, and even those were primarily built for taking screenshots – keep in mind I had to ask the author for a yet-to-be-released version that supported automatic camera interpolation between two points.

I started by recreating just a few shots from the beginning of the trailer. I liked those brief seconds of video so much, and they sparked enough interest in the modding community, that I slowly went through recreating the rest. This required bringing more mods into the equation – including a WIP in-game world editor that was released with light protection measures (probably to avoid people bringing it into online modes or adding it into shitty mod merge packs?) which I had to strip, so I could make it play along with the rest of the tools I was using, including some bespoke ones.

Lots of Lua code was injected into the game in the making of this video, and as I said, this is more for comedy and that sense of pride and accomplishment than any sort of game/mod showcase… but I’m happy to report that, besides some minor retiming, color grading, and artificial camera shake and pan effects, all shots were achieved in-engine – except for the two shots involving multiple bikes on screen, which required more trickery.

Then there was the careful recreation of every 2D element in Rockstar’s video, including avatars, icons, text placement, and an hours-long search for fonts whose results I am still not 100% happy with. One of the fonts Rockstar used is definitely Arial, but with a custom lowercase Y… I no longer have those notes, but at one point I could even tell you which font foundry was most likely to have supplied the one in question. And did I mention how I also recreated the song cut Rockstar used, so I wouldn’t have to rely on AI separation with all its artifacts?

I think it was while working on the “mud club” shot that I realized I just wouldn’t be able to recreate everything as precisely as I would like. One idea that crossed my mind was to use the infamous spider tank in place of the monster truck in that shot, but I just couldn’t find an easy way to have the spider tank there with the proper look, while still being able to control my mods. Sure, there were multiple technical solutions for it, but pursuing them would have meant spending days/weeks just on those two or so seconds of video. I also wouldn’t have been able to find matching animations for the characters. So I decided to take some shots in a different direction, one that alludes to the setting of Watch Dogs.

Eventually, I let that creative freedom permeate other points of the video. For example, the original “High Rollerz Lifestyle” shot would have been somewhat easy to recreate (the animations for the main character in it notwithstanding) but I felt I had already proven I could recreate easy shots, so I decided to have some fun with it and instead we ended up with “High Hackerz.” Similarly, the final shot features three protagonists instead of two, because I couldn’t decide which one was the most relevant “second character” in the world of Watch Dogs.

The end result seems to have been met with great acclaim, judging by all the public and private praise I’ve been receiving. There are people asking me to continue making this sort of thing, too, which I am not sure is something I want to pursue, especially not on a regular basis – I think a large portion of the fun I had making this was precisely because it had a sufficiently closed scope and was sufficiently distinct from what I usually do, and I suspect I would have a worse time making more open-scoped machinima, particularly in this game where the tooling is only “limited but functional.”

There are also people asking for this sort of thing done in Watch Dogs 2 rather than in the first game – but there are even fewer mods for that game, and I have even less knowledge of its internals. Judging by the title of Rockstar’s trailer, it’s likely there will be at least a second trailer, so maybe I can combine the wishes of both sets of people by then. It’s probably not something I’ll feel the drive to do, though – it will also depend on how busy I am with life by the time that second trailer releases.

As I was taking care of the last shots and editing tweaks, I was definitely feeling a bit tired of this project, and subconsciously I probably started taking some shortcuts. Looking back on the published result, there are definitely aspects I wish I had spent some more time on. There is an entire monologue section from the trailer missing in my video, which I could pass off as an artistic decision, but the truth is that I only realized I hadn’t recreated it, or found a replacement for it, until after the video was up on YouTube. Similarly, for the effort this took, I wish I had captured the game at a resolution higher than 1080p (my monitor’s vertical resolution), because after going through editing (having to apply cropping, zooming, etc.) the quality of the video really suffers in some aspects. But the relevancy of this meme was dropping by the day, and if I had spent much more time on it, not only would I have been sick and tired of the entire thing, the internet would also have moved on. It is what it is, and once again similarities are found between art and engineering: compromises had to be made.

One thing is for sure, the next video I publish on my YouTube channel is unlikely to live up to these newfound expectations, and I like to think that I have learned enough to deal with that. Meanwhile, and on the opposite note, I hope that 2024 lives up to all of your expectations. Have a great new year!

Musings about Watch Dogs

Before I start, a word about this website. It has mostly sat abandoned, as having a full-time software development job doesn’t leave me with the comparatively endless amounts of free time and mental bandwidth I once had. What remains, in terms of screen time, is usually spent working on other projects or doing things unrelated to software development that require lower amounts (or different kinds) of mental activity, like playing ViDYAgAMeS, arguing with people on Discord, or mindlessly scrolling through a fine selection of subreddits and Hacker News. While I quite enjoy writing, it’s frequently hard to find something to write about, and while I have written more technical posts in the past – this one about MySQL being the most recent example – these often feel too close to my day job. So, for something completely different, here’s some venting about a video game series – this was in the works for over a year, and is my longest post yet. Maybe this time I’ll actually manage to start and finish a series of blog posts.


Watch Dogs is an action-adventure video game series developed and published by Ubisoft, and it is not their attempt at an Animal Crossing competitor, despite what the name might suggest. The action takes place in open worlds that are renditions of real-life regions; at the time of writing, there are three games in the series: Watch Dogs (WD1), released in 2014 and set in a near-future reimagining of Chicago; Watch Dogs 2 (WD2), a 2016 game set in a similar “present-day, but next year” rendition of the San Francisco Bay Area; and Watch Dogs: Legion (WDL), a 2020 game set in a… uh… Brexit-but-it’s-become-even-worse version of London. The main shtick of these games, in comparison with others in the same genre, is their heavy focus on “hacking,” or perhaps put more adequately, “an oversimplification, for gameplay and storytelling purposes, of the new delicate information security and societal challenges present in our internet-connected world.”

WD1’s launch menu background is a long video that emulates glitch art (also known as datamoshing) and features key story characters and locations.

The games fall squarely into two categories: “yet another Ubisoft open world game” and what some people call “GTA Clones.” It’s hard to argue against either categorization, but the second one, in itself, has some problems. The three Watch Dogs games came out after the initial release of the latest entry in the Grand Theft Auto series (GTA V in 2013), and GTA VI is yet to be officially announced, so snarky people like me could even say that, if anything, Watch Dogs is a continuation, not a clone, of GTA!

More seriously, there are people on the internet who will happily spend some time telling you how “GTA clone” is a terrible designation that is actually hurting open world games in general, by discouraging developers from making more open world games with a modern setting – and I generally agree with them. But I prefer to attack this “GTA clone” designation in a different way, the childish one, where you point the finger back at the accuser and yell “you too!”: GTA Online has, in several of its updates, also “cloned” some of the gameplay elements most recently seen in Watch Dogs, and GTA in general has also taken inspiration from different open world games that were released over the years.

“Player Scanner”, a GTA Online novelty. Image source (because I’m too lazy to find a GTA Online session with cooperating players)

In a 2018 update, Rockstar brought a “Player Scanner” to GTA Online, which is reminiscent of the “Profiler” in Watch Dogs games, and in the same update, they also introduced weaponized drones that can be compared to the drone in WD2. More recently, GTA Online received a new radio station whose tracks are obtained from collectibles spread around the world – similar to how the media player track list can be expanded in WD1. I doubt that Watch Dogs was the primary motivation or inspiration for these mechanics, and they were hardly exclusive to Watch Dogs, but the point is that the “cloning” argument can go both ways.

Nowadays, when it comes to open world games, there’s hardly anyone “cloning” a particular game series. Watch Dogs games are GTA competitors, but the same can be said about countless other games, including many that don’t even make use of open world mechanics. None of this negates the fact that, despite not being a “GTA clone,” Watch Dogs ticks all the boxes of said unfortunately named category, for which a better name would totally be “open world games set in a place recognizable as the world we presently live in.” And therefore I won’t hide the fact that many of the comparisons I’ll make will be directly against the two “HD Universe” GTA titles, IV and V, as these are definitely the most well-known and successful games in said category.

I have played through all three games in the Watch Dogs series, on PC. I’m certain I spent more time than the average player in the first two games, having played through both twice, going for the completionist approach the first time I played both of them, and having spent more time than I’d like to admit in the multiplayer modes of WD1 and WD2. By “completionist approach,” I mean getting the progression meter to 100% in the first game, and going for all the collectibles spread around the map in WD2, in addition to completing all the missions. Why? Because, in general, I found their gameplay and virtual worlds enjoyable, regardless of their story or general “theme.”

While players and Ubisoft marketing tend to overly focus on the “hacking” aspect of the series, in my opinion its most distinctive aspect, compared to other open world games, is the fact that more than being a shooter, these can be open world puzzle games, requiring some thought when approaching missions, especially when opting for a stealthier approach. Mainly in the most recent two games, and to some degree in the first one too, there are typically multiple approaches to completing missions, catering to wildly different play styles. This extends even to their multiplayer aspects and adds to the replayability of the games. For example, I went for a mainly “guns blazing” approach on my first WD2 playthrough and settled with a “pacifist” approach when I revisited WD2 for a second time – which, in my opinion, is the superior way to get through the game’s story. But let’s not get ahead of ourselves.

Initially, I was going to write a single post with my thoughts about the three games. As I was writing some notes on what I wanted to say, I realized that a single post would be insufficient – even the individual posts per game are going to be exhaustively long. So I decided to write separate posts, in the order the games have been released, which is also the order I have played them. This post will be about the first Watch Dogs, and the next one will be about its sole major DLC, called Bad Blood.

My notes file for the whole series has over 200 bullet points, so hold on to your seats. Before we continue onto WD1, I just want to mention one more thing: I’m going to assume you have some passing familiarity with the three games, even if you have not played them yourself. I won’t be doing much more series exposition; I mostly want to vent about it, not write a recap. Still, I’ll try to give a bit of a presentation on each thing I’ll talk about, so that those who have played the games before can have a bit of a recap, and so that those who haven’t – but for some reason are still reading this – aren’t left completely in the dark.

Onto what is probably the lengthiest ever rant/analysis/retrospective of WD1. Enjoy!


The Appeal To Celebrity Fallacy

“An appeal to celebrity is a fallacy that occurs when a source is claimed to be authoritative because of their popularity” [RationalWiki]

Today I was greeted by this Discord ping:

What I want to talk about is only very tangentially related to what you see above, and is the result of some shower thoughts I had after reading that. I did not watch the video, and I do not intend to, just like I haven’t watched most of DarkViperAU’s “speedrunner rambles” or most of his other opinion/reaction videos about a multitude of subjects and personalities. My following of these YouTube drama episodes hasn’t gone much beyond reading the titles of DVAU’s videos as they come up on my YouTube subscriptions feed. What I want to talk about is precisely why I don’t watch those videos and why I think that many talented “internet celebrities” or “content creators” would be better off not making them, and/or why the fans who admire them for their work alone would be better off ignoring that type of content.

OK, I was planning on writing a much longer post but I realized that my arguments would end up being read as “reaction videos and YouTube drama are bad and you’re a bad person if you like them”, which is really not the argument that I want to make here. Instead, let me cut straight to the chase:

Just because you admire someone’s work very much,
that doesn’t mean that you must admire its creators just as much,
nor that you should agree with everything they say
(nor that everything they say and do is right),
and the high quality of some of their work does not necessarily make them quality people nor makes all of their work high-quality.

This is one of those things that is really obvious in hindsight. Yet I often find it hard to detach works from their creator, and I believe this is the case for a majority of people, otherwise the “appeal to celebrity” fallacy would not be so common, and there wouldn’t be so many people interested in knowing what different celebrities have to say in areas that have nothing to do with what made them popular and successful in the first place.

This is not a “reaction/opinion pieces are bad” argument. If someone’s most successful endeavor is precisely to be an opinion maker, then I don’t see why they shouldn’t be cherished for that, and their work celebrated for its quality. But should you not like their work, you’re still allowed to like them as a person, and vice-versa.

DarkViperAU is an example of a “newfound internet celebrity” I admire for much of their work but who is progressively also veering off to a different type of content/work (of the “opinion making” type) which, if I were to pay attention to it, could greatly reduce my enjoyment of the parts of his content that I find great. For me, the subject of today’s ping on his Discord was a great reminder of that, and sent me off in a bit of a shower thought journey.

While I am not fond of end-of-year retrospectives – calendar conventions do not necessarily align with personal milestones – 2020 was definitely the most awkward year in recent times for a majority of the world population. It was an especially awkward year for me, as among many other things, it was when I fell into what I’d describe as an “appeal to a celebrity’s work” fallacy. I initially believed I’d really like to work with people who make a project I admire very much, but over the months I found some of their methods and personalities to really conflict with my personal beliefs, and yet, I kept giving my involvement second chances, because I really felt like the project could use my contribution.

In the end, there’s no problem in liking an art piece exclusively because of its external appearance, even if you are not a fan of the materials nor of some of its authors. And if you think you can improve on that piece of art, expect some resistance from the authors, keeping in mind it might fall apart as you attempt to work on it. Sometimes making your own thing from scratch is really the better option: you might be called an imitator and the end result may even fall short of your own expectations, but you’ll rest easy knowing that you have no one but yourself to blame.

On a more forward-looking note, I wish you all the best for the years to come after 2020. I have a new Discord server which, unlike the UnderLX one, is English-speaking and not tied to any specific project or subject. My hope is to get in there those who I generally like to talk to and work with, so we can all have a great time – you know, the typical thing for a generalist Discord server. I know this is an ambitious goal for just yet another one of these servers, but that won’t stop me from trying. My dear readers are all invited to join Light After Dinner.

TIME for a WTF MySQL moment

Many people have been experiencing strange time perception phenomena throughout 2020, but certain database management systems have been into time shenanigans for way longer. This came to my attention when a friend received the following exception in one of his projects (his popular Discord bot, Accord), coming from the MySQL connector being used with EF Core:

MySqlException: Incorrect TIME value: '960:00:00.000000'

Not being too experienced with MySQL, as I prefer PostgreSQL for reasons that will soon become self-evident, for a brief moment I assumed the problem with this value was the hundreds of hours, as one could reasonably assume that maybe TIME values were capped at 24 hours, or that a different syntax was needed for values spanning multiple days, and that one would need to use, say, “40:00:00:00” to represent 40 days. But reality turned out to be more complex and harder to explain.

Checking the documentation being the most natural next step, here is what the MySQL documentation says:

MySQL retrieves and displays TIME values in 'hh:mm:ss' format (or 'hhh:mm:ss' format for large hours values).

So far so good, our problematic TIME value respects this format, but the fact that hh and hhh are explicitly pointed out is already suspect (what about values with over 999 hours?). The next sentence in the documentation explains why, and left me with even more questions of the WTF kind:

TIME values may range from '-838:59:59' to '838:59:59'.

Oooh Kaaay… that’s an oddly specific range, but I’m sure there has to be a technical reason for it. 839 hours is 34.958(3) days, and the whole range spans exactly 6040798 seconds. The documentation also mentions the following:
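Those figures are easy to sanity-check; here’s a quick back-of-the-envelope verification in Python (my own sketch, obviously not anything from MySQL itself):

```python
# Endpoints of MySQL's TIME range, expressed in seconds.
def to_seconds(hours, minutes, seconds):
    return hours * 3600 + minutes * 60 + seconds

max_time = to_seconds(838, 59, 59)      # the '838:59:59' endpoint
span = max_time - (-max_time)           # from -838:59:59 to +838:59:59

assert max_time == 3_020_399
assert span == 6_040_798                # the "oddly specific" 6040798 seconds
assert abs(839 / 24 - 34.9583) < 1e-3   # ~34.958(3) days
```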

MySQL recognizes TIME values in several formats, some of which can include a trailing fractional seconds part in up to microseconds (6 digits) precision.

Therefore, it also makes sense to point out that the whole interval spans 6 040 798 000 000 microseconds, but again, these seem like oddly specific numbers. They are not near any power of two, the latter being between 2^42 and 2^43, so MySQL must be using some awkward internal representation format. But before we dive into that, let me just point out how bad this type is. It is the closest MySQL has to a time interval type, and yet it can’t deal with intervals that are just a bit over a month long. How much is that “bit”? Not even a nice, rounded number of days, it seems.

To make matters worse, it appears that the most popular EF Core MySQL provider maps .NET’s TimeSpan to TIME by default, despite the fact that TimeSpan can contain intervals in the dozens of millennia (it uses a 64 bit integer and has 100 ns, i.e. 10^-7 s, precision) compared to TIME’s measly “a bit over two months”. This is an issue other people have run into, and the discussion in that issue includes a “This mimics the behavior of SQL Server” remark, which made me go check and, sure enough, SQL Server’s time is meant to encode a time of day and has a range of 00:00:00.0000000 through 23:59:59.9999999, something which overall makes more sense to me than MySQL’s odd TIME range.

So let’s go back to MySQL. What is the reasoning behind such an interesting range? The MySQL Internals Manual says that the storage for the TIME type has changed with version 5.6.4, having gained support for fractional seconds in this version. It uses 3 bytes for the non-fractional type. Now, had they just used these 3 bytes to encode a number of seconds, they would have been able to support intervals spanning over 2330 hours, which would already be a considerable improvement over the current 838 hours maximum, even if still a bit useless when it comes to mapping a TimeSpan to it.

This means their encoding must be wasting bits, probably so it is easier to work with… not sure in what circumstances exactly, but maybe it makes more sense if your database management system (and/or your conception of what the users will do with it) just loves strings, and you really want to speed up the hh:mm:ss representation. So, behold:

1 bit sign (1= non-negative, 0= negative)
1 bit unused (reserved for future extensions)
10 bits hour (0-838)
6 bits minute (0-59) 
6 bits second (0-59) 
24 bits = 3 bytes
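As a toy model of that layout (my own sketch, not MySQL’s actual code), packing and unpacking looks like this:

```python
def pack_time(hours, minutes, seconds, negative=False):
    # Layout, from the most significant bit down:
    # 1 sign bit | 1 unused bit | 10 hour bits | 6 minute bits | 6 second bits
    assert 0 <= hours <= 838 and 0 <= minutes <= 59 and 0 <= seconds <= 59
    sign = 0 if negative else 1   # per the docs: 1 = non-negative, 0 = negative
    return (sign << 23) | (hours << 12) | (minutes << 6) | seconds

def unpack_time(packed):
    negative = ((packed >> 23) & 1) == 0
    hours = (packed >> 12) & 0x3FF    # 10 bits
    minutes = (packed >> 6) & 0x3F    # 6 bits
    seconds = packed & 0x3F           # 6 bits
    return hours, minutes, seconds, negative

assert unpack_time(pack_time(838, 59, 59)) == (838, 59, 59, False)
```

Note how string-friendly this is: each of the hour, minute and second fields can be extracted with a shift and a mask, ready for formatting.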

This explains everything, right? Well, look closely. 10 bits for the hour… and a range of 0 to 838. I kindly remind you that 2^10 is 1024, not 838. The plot thickens. I’m not the first person to wonder about this, of course, this was asked on StackOverflow before. The accepted answer in that question explains everything, but it almost didn’t, as it initially dismisses the odd choice of 838 as “backward compatibility with applications that were written a while ago”, and only later is it explained that this choice had to do with compatibility with MySQL version… 3, from the times when, you know, Windows 98 was a fresh operating system and Linux wasn’t 10 years old yet.

In MySQL 3, the TIME type used 3 bytes as well, but they were used differently. One of the bits was used for the sign as well, but the remaining 23 bits were an integer value produced like this: Hours × 10000 + Minutes × 100 + Seconds; in other words, the two least significant decimal digits of the number contained the seconds, the next two contained the minutes, and the remaining ones contained the hours. 2^23 is 8388608, i.e. 838:86:08, therefore, the maximum valid time in this format is 838:59:59. This format is even less wieldy than the current one, requiring multiplication and division to do basically anything with it, except string formatting and parsing – once again showing that MySQL places too much value on string IO and not so much on having types that are convenient for internal operations and non-string-based protocols.
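The old decimal packing is just as easy to model (again, a sketch of mine, not the actual MySQL 3 source):

```python
def pack_time_v3(hours, minutes, seconds):
    # MySQL 3 packed TIME as a plain decimal number, digits HHH MM SS.
    return hours * 10000 + minutes * 100 + seconds

# 23 bits of magnitude cap the packed value at 2**23 - 1 = 8_388_607; the
# largest *valid* time (minutes and seconds below 60) fitting under that
# cap is 838:59:59, while 839:00:00 already overflows it.
assert pack_time_v3(838, 59, 59) <= 2**23 - 1
assert pack_time_v3(839, 0, 0) > 2**23 - 1
```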

MySQL developers had ample opportunities to fix this type, or at the very least introduce an alternative one that is free of this reduced range. They changed this type twice from MySQL 3 until now, but decided to retain the range every time, supposedly for compatibility reasons. I am struggling to imagine the circumstances where increasing the value range for a type can break compatibility with an application – do types in MySQL have defined overflow behaviors? Is any sane person writing applications where they are relying on a database type’s intrinsic limits for validation? If yes, who looked at this awkward 838 hours range and thought of it as an appropriate limitation to carry unchanged into their application’s data model? At this point, I don’t even want to know.

Despite having changed twice throughout MySQL’s lifetime, the TIME type is still quite an awkward and limited one. That unused, “reserved for future extensions” bit is, in my opinion, really the pièce de résistance here. Here’s hoping that one day it will be used to signify a “legacy” TIME value and that, by then, MySQL and/or MariaDB will have support for a proper type like PostgreSQL’s INTERVAL, which has a range of +/- 178000000 years and a very reasonable microsecond precision.

See the comments on this post on Hacker News

Music albums I like

This quasi-abandoned blog notably has a “Music I like” page, which has not been updated… since four years ago. Not that anyone cares, of course. The reasons why I stopped updating it definitely include the previous sentence, but that could apply to the entirety of this website; in the case of that page particularly, there is a more specific reason. Due to contractual changes with my communication services provider, I had to stop using their streaming service, which was the one I had used the most up until that point, and which provided 10 free track downloads per month. By the way, said streaming service was discontinued later in February 2018 – a move which certainly had nothing to do with the fact that my departure in 2016 allegedly brought them from 6 to 5 monthly active users.

This meant that I no longer had a reason to necessarily find at least 10 tracks to download every month, and the rhythm at which I processed new tracks into my library became even less regular. Checking my library now, it seems I went 6 months without processing new music into my library; it’s entirely possible I found mediocre SoundCloud music sufficient for that period. Eventually, legitimate replacement music sources were found, and my library would continue to grow, now having over 400 additional tracks compared to when that page, which does not contain the entirety of my library, stopped being updated. That’s an average of less than 9 new tracks per month, which means I’m adding less music now than when I had the minimum monthly goal of 10.

I could dump a massive update on the “Music I like” page, to inefficiently inform the world about these 🔥 absolute bangers 🔥👌👌, but I decided there is little point to an endless list of mediocre EDM, house, electronic synth-/indie-/progressive-pop singular tracks. Realistically, it wouldn’t provide any benefit over you just finding some fine examples of these genres with the help of YouTube’s and Spotify’s recommendation engines, unless you craved the “Music I like” lists specifically because of the more obscure tracks I found, in which case you are just a creepy weirdo.

I realize it’s about time I move towards sharing my musical taste over more widely accepted methods, such as Spotify playlists; the reason why I’m yet to do so is that I often find my music elsewhere, and I don’t feel like manually adding my 1000+ track library to Spotify, searching track by track. Yes, as a programmer I also realize there are probably tools to help with this. Yes, as a programmer I’m also too lazy to bother. Instead of mentioning individual tracks without commentary, I’m going to talk about, review if you will, the albums which I’ve found to enjoy quite a lot over the last couple of years. And by “albums I enjoy”, I mean albums I like to listen to, from beginning to end, without “unfavorite” tracks.

Let’s start with Good Faith, the album released in November 2019 by Madeon. Wikipedia tells me the genre of this album is supposed to be French House, and I’m like, yeah whatever, because this doesn’t quite sound like house to me and it also doesn’t necessarily sound French. This album has a very different style from the tracks I knew from his previous album, Adventure – “You’re On”, “Pay No Mind”, “Finale” and “The City” – to the point where if it didn’t say Madeon on the cover, I’d probably assume it was from a different artist. I’m totally fine with this change, same artist or not, especially because this latest album apparently aligns more with my current taste; I definitely had “unfavorite” tracks in Adventure, while Good Faith is definitely one I listen to from beginning to end.

Madeon · Good Faith

To stress the fact that I’m going through albums in no particular order, I’ll now talk about an album released earlier, in March 2019: Together, by Third Party. This one is much easier to classify: it clearly is a progressive house album. The melodies are great, vocals on the tracks that have them are nice, the lyrics are acceptable – keep in mind that’s about as much praise or criticism any lyrics are going to get from me, after all, I barely pay attention to them and I find anything fine as long as it doesn’t outright promote human rights violations. I suppose the really noteworthy thing is that I enjoy all nine tracks of it, something which I can’t say about most albums produced by progressive house DJs, which– hold on, albums? Exactly, barely anyone in this genre still bothers releasing cohesive albums, and if an album does get made, there’s a high chance it won’t be more than a collection of the artist’s tracks since the last album. I suppose I like this one because it is a proper album, and the tracks are not only individually enjoyable, they also flow well into each other.

Third ≡ Party · TOGETHER

Let’s now go quite a bit more into mainstream land, and by “mainstream” I don’t want to imply I’m some sort of hipster and that Madeon and Third Party are exquisite, obscure artists. What I mean is, “artists that play in top-50 radios around the world”, such as Dua Lipa. Future Nostalgia is her second album, released in March of this wonderful, blessed year, at least as far as critical acclaim for Dua Lipa albums is concerned (Metacritic score of 88/100). This album has very good tracks from start to end, and overall the title describes the genre of the album perfectly: it’s an album full of late 70s, 80s, early 90s hits, but produced in the future. Not the future with flying cars people used to dream about, but a future with exponentially exciting natural developments, and where I have fiber in this neck of the woods; the jury is still out on whether this was a good trade-off, flying cars could have paved the way for innovative drive-in (fly-in?) supermarkets, and would have an infinitely higher breads-per-second throughput than fiber optics, but I digress. If you never listened to Future Nostalgia beyond the “Physical” and “Don’t Start Now” singles brought to you by your advertiser-friendly neighborhood top-50 radio, you are missing out on many other enjoyable songs.

Dua Lipa · Future Nostalgia

Speaking of artists which play in top-50 radios, let’s talk about The Weeknd and his latest After Hours album, released in March this same wonderful year, as far as critical reception of The Weeknd albums is concerned (Metacritic score of 80/100). Not as wonderful, because 80 is less than Dua Lipa’s 88. And rightfully so, because unlike the other albums I’ve mentioned, this is one I must give a hard pass, so much so that I’m bringing it here just to do that.

After Hours has two extremely well produced, extremely successful synthwave/synth-pop hits: “Blinding Lights” and “In Your Eyes”. Aaaand that’s about it as far as my taste is concerned. I gave a quick listen to the rest of the album: too much R&B for my taste, too little synth-pop. I would even go as far as to say that those tracks don’t quite fit in the album, because their style feels so distant from the rest of the tracks. And as much as I love “Blinding Lights”, it has been so overplayed and overused that it is starting to suffer from “Get Lucky” syndrome – remember how some years ago we got too much of that single Daft Punk track while almost nobody cared about the rest of their excellent Random Access Memories album? I remember, and “Blinding Lights” is slowly getting to that point.

The Weeknd · Blinding Lights

I feel like I also have an obligation to leave this video here:

Leaving top 50 behind, I’m going to make one final recommendation. (Wait. Is this supposed to be a post with music recommendations? Recommendations to myself, I guess.) Released in August this year, less than a month ago, BRONSON is the name of the debut album of the collaborative project of the same name, between ODESZA and Golden Features. It has one more capital letter than ODESZA, so that means it must be better. (Unfortunately, it seems some of their “””fans””” didn’t like the new sounds as much, and did some artist harassment. I don’t even.) For my fellow uncultured gamers, ODESZA has a couple of tracks in Forza Horizon 4’s radios, and “A Moment Apart” plays in the game’s intro and in the menus.

But back to BRONSON. The album is great, even if I wouldn’t mind if it had a couple more tracks. Quality over quantity, I guess. Much like many of the tracks from ODESZA, it’s hard to define their genre beyond something generic like “electronic”. BRONSON is an interesting case of an album I enjoy almost exclusively as a whole. Many of the tracks aren’t tracks I would listen to on their own. But when played from start to end, I really enjoy it, even the parts that wouldn’t normally fit my taste. And I think that’s really the best way to appreciate and evaluate this album: from start to end, without interruptions. Each track transitions seamlessly into the next, to the point where the gap between tracks introduced by some players becomes quite annoying. I’m really glad I didn’t listen to the singles from BRONSON as they were being released, as I’d probably have ignored the album if I had. The last track features Totally Enormous Extinct Dinosaurs, an artist I haven’t paid attention to in ages, and one I really need to take some time to yay-or-nay one of these days (I really enjoyed a couple of his tracks some years ago, notably one that was featured in a Nokia commercial for a Windows Phone – that’s how long ago that was).


I suppose this ends my music reviewer roleplay, and I can now go back to enjoying my generic house tracks as recommended by my Discover Weekly playlist on Spotify. There are a few more albums worth mentioning in my library, but I’ll save those for another time. Maybe the destiny of this blog is indeed to go from cosplaying as a music blog on certain pages, to actually becoming one. It’s not like I feel like talking about work, anyway – and who knows what kind of trouble I’d get in with the HR department(s) if I did. My CV hopefully speaks for itself, and this lousy blog adds nothing anyway, even if I were to add some spectacular “technical posts”.

twenty twenty

time travel terrifyingly trialed: twenty days take twenty months, twenty ticks tore twenty years

The limitations of hiding limitations: a striped case study

This GitHub repo, created just 5 hours before this post, shot to the top of Hacker News quite fast (see the thread). Its content is a readme demonstrating the limitations of current artificial intelligence applications – specifically, the algorithms employed in Amazon product search, Google image search and Bing image search – by showing that searching for “shirt without stripes” does, in fact, bring up shirts, both with stripes and without.

At the time of writing, the brief but clever document can be seen as a mocking criticism of these systems, as it links to the pages where the three companies boast about their “broadest and deepest set” of “cutting-edge” “responsible” AI. I took these words from their pages, and of course you can’t tell which came from where, adding to the fun.

Some of the comment threads on the Hacker News submission caught my attention. For example, this comment thread points out the possible discrimination or bias apparently present in those systems, as doing a Google image search for “person” showed mostly white men to that user. This other thread discusses whether we should even apply natural language processing to a search query. In my opinion, both threads boil down to the same problem: how to manage user expectations about a computer system.

Tools like Google and Bing have been with us for so long, and have been improved to such a point, that even people who work in IT, and have a comparatively deep understanding of how they work, often forget how much of a hack they really are. We forget that, in many ways, Google is just an extremely advanced, web-scale successor to good old grep. And we end up wondering if there is ethnic or gender discrimination in our search results, which there probably is, but not because Google’s cyborgs are attracted to white men.

When web search was less perfect, when you needed to tinker with your search query multiple times to even get close to the results you wanted, it was very easy to see how imperfect those systems were, and we adjusted our expectations accordingly. Now, we still need to adjust our queries – perhaps even more often than before, as some Hacker News commenters have suggested – but the systems are much fuzzier, and what ends up working feels more random to us humans than it once did. Perhaps more interestingly, more users now believe that when we ask Google for something, it intrinsically understands the concepts behind what we mentioned. While work is certainly being done to ensure that is the case, it is debatable whether that will even lead to better search results, and it’s also debatable whether those results should be “unbiased”.

Often, the best results are the biased ones. If you ask your AI “personal assistant” for the weather, you expect the answer to be biased… towards your current location. This is why Google et al. create a “bubble” for their users. It makes sense, it’s desirable even, that contextual information is taken into account as an additional, invisible argument to each search. Programmers are looking for something very specific when they search the web for how to kill children.

Of course, this only makes the “shirt without stripes” example more ridiculous: all the information on whether to include striped shirts in the results is right there, in the query! It does not need any sort of context or user profiling to answer this search query “correctly”. This leads to the impression that these systems should indeed be better at processing our natural language… to the detriment of people who like to treat Google as if it were grep, and who would use something more akin to “shirt -stripes” – which, by the way, does a very good job at not returning shirts with stripes, at least in a private browsing window opened by me!
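The difference between the two mindsets can be sketched in a few lines of code. A toy illustration – this is obviously not how any real search engine works, and the matcher and product titles are made up:

```python
def matches(query: str, title: str) -> bool:
    """Toy grep-style matcher: bare terms must appear, `-term` must not."""
    words = title.lower().split()
    for term in query.lower().split():
        if term.startswith("-"):
            if term[1:] in words:    # excluded term present -> reject
                return False
        elif term not in words:      # required term missing -> reject
            return False
    return True

products = ["shirt with stripes", "plain shirt"]

# The operator query behaves predictably: pure keyword logic, no semantics.
print([p for p in products if matches("shirt -stripes", p)])         # ['plain shirt']

# The natural-language query fails: "without" is just another required keyword.
print([p for p in products if matches("shirt without stripes", p)])  # []
```

A system like this never surprises you, precisely because it doesn’t pretend to understand anything.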

Google image search results for “shirt -stripes”. I only see models with skin colors on the lighter side… hmm 🤔

Yes, I used “shirt -stripes” because I “read the documentation” and I am aware of the limitations of this particular system. Of course, I’m sure Google is working towards overcoming these limitations. But in the meantime, what they offer is an imperfect system that seems to understand our language sometimes – just go ahead and Google something like “what is the president of France” – but fails unexpectedly at other times.

And this is the current state of lots of user interfaces, “artificial intelligence”, and computer systems in general. They work well enough to mask their limitations; we have grown accustomed to their quirks, and because of that masking, we ultimately perceive them to be better than they actually are. Creating “personal assistants” which are ultimately just a glorified front-end to web search has certainly helped us perceive these tools as more advanced than they actually are – at least until you actually try to use them for anything moderately complex, of course.

Their fakery is just good enough to be a limitation, a vulnerability: we end up thinking that something more is going on, that these systems conspire in their quirks, that some hidden agenda is being pushed by having Trump or Greta appear more often in search results. Most likely, there are way more pictures of white people on the internet than of any other skin color1, and I’m also sure that Greta has a better modern-day-equivalent-of-PageRank than me or even the Portuguese president, even taking into account the frequency with which he makes headlines. Blame that on our society, not the tools we use to navigate this mess we created.

1 Quite the bold statement to leave without a reference, I know. Of course, it boils down to a gut feeling, a bias, if you will. Whether computer systems somehow are more human by also being biased, is something that could be discussed… I guess I’ll try to explore this in a future blog post I’ll never get to actually write.

Developing for Android is like being a (demonetized) YouTuber

Many are aware that some YouTubers are unhappy with how YouTube operates. But are you aware that Android app developers go through similar struggles with Google Play? Let me try and explain everything that’s wrong with Android in a single 20-minute read.

Android was once considered the better choice of mobile platform for those looking for customizability, powerful features such as true multitasking, support for less common use cases, and higher developer freedom. It was the platform of choice in research and education, because not only are the development tools free and cross-platform, Android was also a very flexible operating system that did not get in the way of experimenting with innovative concepts or messing with the hardware we own. This is changing at an ever-faster pace.

While major new Android versions used to bring features that got both users and developers excited, for the past few versions I have dreaded the moment a new Android version is announced, finding myself looking for courage (heh) to read its changelogs and developer guidelines. And new Android versions are not the only things that make my heart beat faster for the wrong reasons: changes to Google Play Store policies are always a fun moment, too.

Before we dive in any further, a bit of context: Android was not the first mobile OS I used; references to my experiences and experiments with Windows Mobile 6.x are probably scattered around this blog. I started using Android at a time when 4.2 was the latest version, I remember 4.4 being announced shortly after, and that was the version my first Android phone ran until the end of its useful life. Android was the first, and so far only, mobile operating system for which I got seriously invested in app development.

I started messing with Android app development shortly before 6.0 Marshmallow was released, so I am definitely not an old timer who can say he has seen Android evolve from the beginning, and certainly not from the perspective of a developer. Still, I feel like I have witnessed a decade of changes – in big part, because even during my “Windows Mobile experiments” era, I was paying attention to what was happening on the Android side, with phones I couldn’t yet afford to buy (my Windows Mobile “Pocket PCs” were hand-me-downs). I am fully aware of how bad Android was for both users and developers in the 4.x and earlier eras, in part because I still had the opportunity to use these versions, and in part because my apps had to support some of them.

API deprecation and loss of backwards compatibility

With every Android version, Google makes changes to the Android APIs. These APIs are how apps interact with the operating system, and, simplifying things a bit, they pretty much define what apps can and can’t do. On top of this, some APIs require permissions, which you agree to when you install apps that use them, and some of these permissions can be allowed or denied by the user at runtime (of course, the app can refuse to run if the permissions are denied, but the idea is that it will degrade gracefully and provide at least some functionality without them). This is the case for the APIs that access your contact list or your location.
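As a concrete illustration of the declaration side of this model, permissions are listed in the app’s manifest; “dangerous” ones, like contacts access, additionally require a user-facing prompt since Android 6.0. A minimal sketch (the package name is hypothetical):

```xml
<!-- AndroidManifest.xml -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.hypothetical">
    <!-- "Dangerous" permission: also needs a runtime grant from the user -->
    <uses-permission android:name="android.permission.READ_CONTACTS" />
    <!-- "Normal" permission: granted automatically at install time -->
    <uses-permission android:name="android.permission.INTERNET" />
</manifest>
```

Declaring the permission is only half the story: on Android 6.0 and later, the app must still request the dangerous ones at runtime, and ideally degrade gracefully if the prompt is denied.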

New Android versions include new APIs and, in the past, barely any changes were made to APIs introduced in previous versions. This meant that applications designed with an older version in mind would still work fine, and developers did not need to immediately redesign their apps with new versions in mind.

In the past two to three years, new Android versions have also begun removing APIs and changing how the existing ones work. For example, applications wishing to stay active in the background now have to display a permanent notification – an idea which sounds good in theory, but the end result is having a handful of permanent notifications in your drawer, one for each application that may need to stay active. I have two on my phone: one for the call recorder, and another for the equalizer system. One of my own apps also needs to show a similar notification on Android 8/Oreo and newer, in order to reliably perform Wi-Fi scans to locate the user in specific locations.

In the upcoming Android version 10/Q, Google intends to restrict even more what apps can do. They are removing the ability for apps to access the clipboard, killing an entire category of clipboard management apps (so that you can have a history of what you copied, so that you can sync the clipboard with your other phones and computers, etc.). Currently, all apps can access the clipboard without special permissions, but the correct way to solve this is to add a permission prompt, not to get rid of the API entirely. Applications can no longer turn the Wi-Fi on or off, which prevents automation apps from e.g. turning off the Wi-Fi when you’re driving. They are thinking of entirely preventing apps from accessing arbitrary files in “external storage” (SD cards and the area of internal memory on your phone where screenshots and camera pictures go, and where you put your MP3s, game ROMs for emulation, etc.).

Note that all of these things they are removing for “security” could simply be gated behind a permission prompt you’d have to accept, as with the contact list or location. Instead, they decided to remove these abilities entirely – even if users want these features, apps won’t be able to implement them. Existing apps will probably be review-bombed by users who don’t understand why things no longer work after updating to the shiny new Android version.

These changes to existing APIs have real consequences for users and developers. Applications that worked fine until now may stop working. Developers need to update their apps to reflect this, implement less user-friendly workarounds, add explanation messages, and so on. This takes time, effort and money which would be better spent actually fixing other issues in the apps, or developing new features. For small teams or solo developers, especially those doing app development as a hobby or as a second job, catching up with Google’s latest “trends” can be insurmountable. For example, the change to disallow background services meant that I spent most of my free time during one summer redesigning the architecture of one of my apps, which in turn introduced new bugs, which had to be diagnosed and corrected – and, in the end, said app still needs to show a notification to work properly in recent Android versions.

There are other ways Google can effectively deprecate APIs and thus limit what applications can do, without releasing new Android versions or having to update phones to them. Google can decide that apps that require certain permissions will no longer be allowed on the Play Store. Most notably, Google recently disallowed the SMS and Call Log permissions, which means that apps that look at the user’s call log or messaging history will no longer be allowed on the store.

Apps using these permissions can still be installed by downloading their APKs directly or by using alternative app stores, but they will no longer be allowed on the Play Store. This effectively means that for many apps, the version on the Play Store no longer contains important functionality. For example, call recorders are no longer able to associate numbers with the recordings, and automation apps can no longer use SMS messages as a trigger for actions. Because Google Play is where 99% of people get their apps, this effectively means functionality requiring these permissions is now disallowed, and won’t be available except to an extremely small minority of users who know how to work around these limitations.

The Google Play Store is the YouTube of app developers

Being on the Play Store is starting to feel much like producing content for YouTube, where policy changes can be sudden and announced with little advance notice. On YouTube, producers always have to be on the lookout for what might get a video demonetized, on top of dealing with content claims, both driven by entirely automated, opaque systems. On the Play Store, we need to be constantly looking out for other things that might suddenly get our app pulled or our developer account banned – together with the accounts of everyone who Google decides has anything to do with us:

And this is just a tiny sample, not even the “best of”, of the horrifying stories posted to r/androiddev every other day. For each of these, there are dozens in the respective “categories”. Sometimes the same stories, or similar ones, also make the rounds on Hacker News. It seems Google treats Play Store bans and app removals with the same flippancy with which online games ban players suspected of cheating, or worse. Playing online games isn’t the career of most people who do it, but Android app development is, which leads to the obvious question: what do people do when they are banned?

After writing this, I realize my YouTube analogy is terrible. You see, on YouTube generally one receives strikes, instead of waking up one day to suddenly see their account banned. YouTubers also have the opportunity to profit from the drama caused by the policy changes by “reacting” to them, for example. And while YouTubers typically have the sympathy of their viewers, app developers have to deal with user outrage – because users have no idea, or don’t care, about why we’re being forced to massively degrade the performance and features of our apps. For example, the developer of ACR, a popular call recorder, had to deal with bad app reviews, abuse and profanity among thousands of emails from outraged users after removing the call log permission, and this was after an extensive campaign warning users of the upcoming changes (as a user of ACR, I uninstalled the Play Store version and installed the “unchained” version, which keeps the call log features, through XDA Labs).

As a freelance developer or as a small company, developing for Android is riskier than ever. I can start working on an app idea today and it’s possible that in six months, when it is ready for the initial release, changes to the store policy will have rendered my app unpublishable or have severely affected its functionality… in addition to the aforementioned point about APIs deprecating and changing semantics, requiring constant upkeep of the code to keep up with the latest versions.

If you opened the links above, by now you have probably realized another thing: user support with actual humans is non-existent – if only their bots were as responsive as Google Assistant… And if they are not bots, then they are humans who only spit out canned responses, which is just as bad. It is widely known that the best method for getting problems with Google Play listings solved is to catch the attention of a Google employee on social media.

It seems the level of support Google gives you is correlated to how many people will read your rants about your problems with their platforms. And it’s an exponential correlation, because being big isn’t enough to get a moderate level of support; you must be giant. This is a recurring problem with most Google services, especially if you are not using G Suite (apparently, app developers do not count as “paying customers” when it comes to support). Of all the things I’d like the EU to regulate (and especially, to not regulate, but that’s a story for a different time), the obligation for these mega-corporations to provide actual user support is definitely one of them.

Going back to the probably flawed YouTube analogy, there’s one more parallel to draw: many people believe that in recent years, YouTube has been making changes to both policies, business models and the “algorithm”, that heavily favor the big, already-established creators and make it hard for smaller ones to ever be successful. I believe we are seeing a similar trend on the Google Play Store – just keep in mind you must not analyze an app’s popularity or “level of establishment” by the number of downloads or active users, but by how much profit it generates in ad revenue and IAP cuts.

“Android is open source”

“Android is open source” is the joke of the year – for the fifth consecutive year. While it is true that the Android Open Source Project (AOSP) is still a thing, many of the components that make Android recognizable and usable, both from an end user and developer’s perspective, are increasingly closed source.

Apps made by Google are able to do things third-party apps have trouble replicating, no doubt due to the tight-knit interaction between them and the proprietary behemoth that is Google Play Services. This is especially noticeable in the “Google” app itself, Google Assistant, and the Google launcher.

If you install an AOSP build, many things will be missing and many apps – my own included – will have trouble running. Projects looking to provide “de-googlified” versions of Android have developed extensive open source replacements for many of the functions provided by Google Play Services. The fact that these replacements had to be community-developed, and the fact that they are very much necessary to run the majority of the popular applications, show that nowadays, Android can be considered open source only in the same loose sense that it can be considered a Linux distro.

AOSP, on its own, is effectively controlled by Google. The existence of AOSP is important, if nothing else, to define common APIs that the different “OEM flavors” of Android must support – ensuring, with minor caveats, that we can develop for Android and not for “Samsung’s Android” or “Nokia’s Android”. But what APIs come and what APIs go is completely decided by Google, and the same is true for the overall system architecture, security model, etc. This means Google can bend AOSP to their will, strip it of features and move things into proprietary components as much as they want.

Speaking of OEMs and inter-device compatibility, it’s obvious that this push towards implementing important functionality in Google Play Services and making the whole operating system operate around Google’s components has to do with keeping the “OEM flavors” under control. A positive effect for users and developers is that features and security patches become available even on devices that don’t receive OEM updates, or only receive updates for the major Android version they came with, and therefore would never receive the new features in the latest major release. A negative effect is that said changes can affect even old Android versions overnight and completely at Google’s discretion, much like restrictions on what APIs and permissions apps on the Play Store are allowed to use.

Google’s guiding light when it comes to Android openness seems to be opening the Android source only as much as necessary for OEMs to make it run on their devices. We are not at that extreme point yet – mainly because the biggest OEMs have enough leverage to prevent it from happening. I feel that at this point, if Google were able to make Android entirely closed source, they would do it. I wonder what future Fuchsia holds for us in this regard.

So secure you can’t use it

The justifications for many of the changes in later Android versions and Google Play policies usually fall into one of two types: “security” and “user experience”, with the latter including “battery life”. I’m not sure for whom Google is designing their “user experience” in recent years, but it certainly isn’t for “proficient users” like me. Let’s, however, talk about security first.

Security should be proportional to the value of what it is protecting. With each major Android version, we see a bigger focus on security; for example, it’s becoming harder and harder to root a phone, short of installing a custom ROM that includes superuser functionality from the start. One might argue this is desirable, but then you notice security and privacy have also been used as the excuse to disallow the use of certain permissions, like call log and messaging access, or to remove APIs entirely, including the external storage one.

This increase in security strength makes sense: security is now stronger because we are also storing more valuable information in our phones, from “old-fashioned” personal information about us and our acquaintances, to biometric information like fingerprint, facial and retinal scans. Of course, and this is probably the part Google et al. are most worried about, we’re also storing entire payment systems, the keys for DRM castles, and so on.

Before finishing my point about security, let’s talk a bit about user experience. User experience is another popular excuse for making changes while limiting or altogether removing certain features. If something has to be particularly complicated (or even “insecure”) in order to support the use cases of 1% of the users, it often gets simplified… while the “particularly complicated” or “insecure” system is stripped entirely, leaving the aforementioned 1% with a system that no longer supports their use cases. This doesn’t sound too bad, right? However, if you repeat the process enough times, as Google is bound to do in order to keep releasing new versions of their software (so that their employees can get their bonuses), tying the hands of 1% of the users at a time, you are probably going to be left with something that lets you watch ads only… and probably Google ads at that, I guess. You didn’t need to make phone calls, right? After all, the person on the other side might be pulling a social engineering scheme on you, or something…

Strong security and good user experience are hard to combine. It seems that permission prompts provide neither sufficient security nor an acceptable user experience, because apparently it’s easier to remove the permissions altogether than to let users have a choice.

User choice is what all of this boils down to, really. Android used to give me the choice of being slightly insecure in exchange for having more powerful and innovative features in the apps I install, than in the competing mobile platforms. It used to give me the choice of running 10 apps in the background and having my battery last half a day as a result, but now, if I want to do so, I must deal with 10 ongoing notifications. I used to be able to share files among apps as I do on my desktop, but apparently that is an affront to good security too. I used to be able to log the Wi-Fi networks in my vicinity every minute, but in Android 9 even that was limited to a handful of scans per hour, killing some legitimate use cases including my master’s thesis project in the process. Fortunately, in academia we can just pretend the latest Android version is 8.

Smart cards, including SIM cards, were invented to containerize the secure portion of systems. Authentication, attestation, all that was meant to be done there, such that the bigger system could be less secure and more flexible. Some time in the last two decades, multiple entities decided it was best (maybe it provided “better user experience”?) that important security operations be moved into the application processor, including entire contactless payment systems. Things like SafetyNet were created. My argument in this section goes way beyond rooting, but if my phone is rooted and one of the apps to which I granted root permission steals my banking details, … apparently the banking app shouldn’t have been allowed to run in the first place? Imagine if the online banking of my bank refused to open on my desktop because it knows I know the password for the administrator account.

Still on the topic of security, by limiting what apps distributed on the Play Store are allowed to do and ending support for legitimate use cases, Google ends up encouraging side-loading (direct APK download and installation). This is undesirable from a security point of view, and I don’t think I need to explain why.

Our phones are definitely more secure now, but so much “security” is crippling the use cases of people who do more than binge-watch YouTube and their social network feeds. We should also keep in mind that many people are growing up with smartphones and tablets alone, so “just use your desktop for those advanced tasks” is not an answer. It’s time for my ridiculous proposal of the week: what about not storing so much security-sensitive stuff in our phones, so that we don’t need so much security, and can actually get back the flexibility – and the “security pitfalls” – we had before? Android, please let me shoot myself in the foot like you used to.

Lack of realistic alternatives

This evolution of Android towards appealing to the masses (or to Google’s definition of what the general public should be allowed to do) would not worry me so much if, as a user, I had a viable mobile OS alternative. On the Apple side, we have iOS, whose appeal from the start was to provide a secure, “it just works” platform, with limited flexibility but equally limited margin for error. Such a platform is actually a godsend for many people, who I don’t doubt make up the majority of users. It doesn’t work for me, because, as I said, I need to be able to shoot myself in the foot if I want to: let me have 2 hours of battery life if I want, let my own apps spy on my location if I want.

This was fine for many years, because we had Android, which let us do this kind of stuff. It just so happens that because of AOSP, and because there were no other open source or licensable platforms with traction, Android ended up being the de facto standard for every smartphone that isn’t an Apple one. On the low end, Android is effectively the only option. Of course, this gave Android the largest market share. Since “everyone” uses it now, there’s pressure to copy the iOS model of “it just works” and “safe for people with self-harm tendencies” – you can’t hurt yourself even if you want to.

Efforts to introduce an Android competitor have been laughable at best. Windows Phone/Windows Mobile failed in part because of a weak and possibly too-late entry, combined with a dubious “vision” and bad management decisions on Microsoft’s part. In the end, what Microsoft had was actually good – if that weren’t the case, there wouldn’t still be plenty of die-hard WP/WM fans – but arriving so late (and with so many mixed signals about the future of the platform) meant developers were never sufficiently captivated, and without the top 100 apps on board, users won’t find the platform any good, no matter how excellent it is from a technical standpoint. Obviously, it does not help that a significant number of those “top 100 apps” are Google properties; in fact, the only reason Google has their apps on iOS is that, well, iOS was already there when they arrived on the scene.

If even a big player with stupidly deep pockets like Microsoft can’t introduce a third mobile platform, the result of smaller-scale attempts like Firefox OS is quite predictable. These smaller attempts have an additional problem: finding hardware to run on. It doesn’t help that you can’t change the OS on a phone the same way you can on a PC. In fact, in the long-gone year of 2015, I was already ranting about the lack of standardization in smartphone hardware. It’s actually fun to go back to that post, made when Android 4.4 was the latest version, and see how my perception of Android has changed.

I should also note that if a successful Android alternative appears, it will definitely run Android apps, probably through a compatibility layer. In a way, Android set the standard for apps much in the same way that 15 years ago, IE6 was setting web standards in the worst way possible. Did someone say antitrust?

Final thoughts

Android, and therefore Google, set the standard – and the implementation – for what we can and can’t do with a smartphone, except when Apple introduces a major innovation that OEMs and Google are compelled to quickly implement in Android. These days, it seems Apple is stalling a bit on the smartphone innovation front, so Google is taking the opportunity to “innovate” by making Android more similar to iOS, turning it into a cushioned, limited, kid-safe operating system that ties the hands of developers and proficient users.

Simultaneously, Google is solving the problem of excessive shovelware – and even a bit of malware – on the Play Store by adding more automation, being even less open about their actions, and being as deaf as ever. Because it’s hard to tell whether apps are using certain permissions legitimately or not – and because no user shall be trusted to decide that by themselves – useful applications, from call recording tools, to automation, to literally any app that might want to open arbitrary files in the user storage, are being “made impossible” by the deprecation and removal of said permissions and APIs.

We desperately need an Android alternative, but the question of who will develop, use, and target said alternative remains unanswered. What I do know is that I no longer feel happy as an Android developer, I no longer feel happy as an Android user, and I’m not at all likely to recommend Android to my friends and family.

Edited at 2:56 March 28th UTC to add clarification about Android clipboard access.

See the discussion for this article on Hacker News, r/AndroidDev, r/Android

I really like Discord. It’s a monster, it scares me

…and it’s also the next Steam.

Dear regular readers: we all know I’m not a regular writer, and you were probably expecting this to be the second post on the series about internet forums in 2018. That post is more than due by now – at this rate it won’t be finished by the end of the year – even though the series purposefully never had any announced schedule. I apologize for the delay, but bear with me: this post is not completely unrelated to the subject of that series.

Discord, in case you didn’t know, is free, proprietary instant messaging software with support for text, voice and video communication – or, as they put it, “All-in-one voice and text chat for gamers that’s free, secure, and works on both your desktop and phone.” Launched in 2015, it has indeed become very popular among gamers – even though the service is definitely usable and useful for purposes very distant from gaming, and for people who don’t even play games. In May, as it turned three years old, the service had 130 million registered users, but this figure is certainly out of date, as Discord gains over 6 million new users per month.

If you have ever used Slack, Discord is similar, but free, easier for random people to set up, and designed to cater to everyone, not just businesses and open source projects. If you have ever used Skype, Discord is similar, but generally works better: the calls have much better quality (to the point where users’ microphones are actually the limiting factor), it uses fewer system resources than modern Skype clients on most platforms, and its UI, stability and reliability don’t get worse every month as Microsoft decides to ruin Skype some more. You can have direct conversations with other people or in a group, but Discord also has the concept of “servers”, which are usually dedicated to a game, community or topic, and have multiple “channels” – just like IRC and Slack channels – for organizing conversations and users into different topics. (Beware that despite the “server” name, Discord servers cannot be self-hosted; in technical documents, servers are called “guilds”.)

Example of Slack bot in action. Image credit: Robin Help Center

Much like in Slack (and, more recently, Skype, I believe), bots are first-class citizens, although they are perhaps not as central to the experience as in many Slack communities. In Discord, bots appear as any other user, but with a clearly visible “bot” tag; they can send and receive messages like any other user, participate in text and voice chats, and perform administrative/moderation tasks if given permission… to sum it up, the only limit is how much code is behind each bot.

Example of Discord bot in action. Discord bots can also join voice channels, e.g. to play music.
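To make the “only limit is the code” point concrete, here is a minimal sketch of such a bot, assuming the third-party discord.py library; the command names and replies are invented for illustration, and the token handling is one possible approach, not Discord’s prescribed one.

```python
from typing import Optional
import os

# Pure command logic, kept separate from the network wiring so it is
# easy to test on its own. "!ping" and "!echo" are made-up commands.
def handle_command(content: str) -> Optional[str]:
    if content == "!ping":
        return "Pong!"
    if content.startswith("!echo "):
        return content[len("!echo "):]
    return None  # not a command this bot answers

# The Discord wiring only runs when a bot token is available; it assumes
# the third-party discord.py library (pip install discord.py).
if os.environ.get("DISCORD_BOT_TOKEN"):
    import discord

    intents = discord.Intents.default()
    intents.message_content = True  # needed to read message text
    client = discord.Client(intents=intents)

    @client.event
    async def on_message(message):
        if message.author.bot:  # ignore other bots (and ourselves)
            return
        reply = handle_command(message.content)
        if reply is not None:
            await message.channel.send(reply)

    client.run(os.environ["DISCORD_BOT_TOKEN"])
```

The same structure extends to moderation commands or voice playback; the bot’s permissions are granted when it is invited to a server.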

I was introduced to Discord by a friend at the end of 2016. We were previously using Skype, and Discord was – even at the time – already clearly superior for our use cases. I found the “for gamers” aspect of it extremely cheesy, so much so that for a while it put me off using it as a Skype replacement. (At the time, we were using Skype to coordinate school work and talk about random stuff, and I really wasn’t a “gamer”, on PC or any other platform.) I finally caved in, to the point where I don’t even have Skype start up on my computers anymore, and the Android app stays untouched for weeks – I only open it to talk to the two or three people who, despite heavy encouragement, didn’t switch to Discord. The only thing Discord didn’t have back then was screen sharing – that is no longer the case – but it was so good that we kept using it and went with makeshift solutions for screen sharing.

As time went by, I would go on to advocate for the use of Discord, join multiple servers, create my own ones and even build a customized Discord bot for use in the UnderLX Discord server. Discord is pleasant to use, despite the fact that it tends to send duplicate messages under specific terrible network conditions – the issue is more prominent when using it on mobile, at least on Android, over mobile data.

Those who have been following what I say on the internet for longer might be surprised that I ended up using and advocating for the use of a proprietary chat solution. After posts such as this one, where I look for a “free, privacy friendly” IM/VoIP solution, or the multiple random forum posts where I complain that all existing solutions are either proprietary and don’t preserve privacy/prevent data collection, or are “for neckbeards” for being unreliable or hard to set up, seeing me talk enthusiastically about Discord might make some heads spin.

I suppose this apparent change of heart is fueled by the same reason why many people, myself included, use the extremely popular digital store, DRM platform (and wannabe Discord competitor… a topic for later) Steam: convenience. It’s convenient to use the same store, launcher and license enforcer for all games and software; similarly, it’s convenient to use the same software to talk to everyone, across all platforms, conversation modes, and topics. It’s an exchange of freedom and privacy for convenience.

Surprise, surprise: it turns out that making a free-as-in-freedom – libre, if you prefer – platform for instant messaging that provides the desired privacy and security properties, in addition to all the features most people have come to expect from modern non-free platforms like Facebook Chat or Skype, while being as easy to use as them, is very difficult. Using the existing popular platforms does not involve setting up servers, sharing IP addresses among your contacts, dealing with DDoS attacks against those servers or the contacts themselves, etc.; for an alternative platform to succeed, it must match all of that, and ideally be prepared to deal with the friction of getting everyone and their contacts to use a different platform. It was already difficult in 2013, when I wrote that post, and the number of hard-to-decentralize features in the modern chat experience hasn’t stopped growing in these five years. The technology giants are not interested in developing such a platform, and independent projects such as Matrix.org are quite promising but still far from being “there”. And so everyone turns to whatever everyone else is using.

In my opinion, Discord happened to be the best of the currently available, viable solutions that all my friends could actually use. It is, or was, a company and a product focused on providing a chat solution that’s independent from other products or larger companies, unlike Messenger, Hangouts or Skype, which come with all the baggage from Facebook, Google and Microsoft respectively. Discord, despite having the Nitro subscription option that adds a few non-essential features here and there, is basically free to use, without usage limits – unlike Slack, which targets company use and charges by the user.

List of Discord Nitro Perks in the current stable version of Discord. Discord is free to use, but users can pay $4.99/month or ten times that per year to get access to these features.

What about sustainability – what is Discord’s business model? To me it was painfully obvious that Nitro subscriptions couldn’t make up for all the expenses. Could they just be burning through VC money, only to die later? Even accounting for selling users’ data, it wasn’t immediately obvious to me that the service would be sustainable on its own. But I never thought too much about this, because Discord is super-convenient, and alternative popular solutions run their own data collection too, so I just shrugged and moved on. If Discord eventually ran out of money, oh well, we’d find an alternative later.

Back to praising the product, Discord is cross-platform, with a consistent experience across all platforms, and can be used in both personal/informal contexts and work/formal contexts. In fact, Discord was initially promoted to Reddit communities as a way to replace their inconvenient IRC servers, and not all of those communities were related to gaming. If only it didn’t scream “for gamers” all over the place…

I initially dismissed this insistent targeting of the “gamers” market as just a way to continue the segmentation that already existed… after all, before Discord there was TeamSpeak, which was already aimed at gamers and indeed primarily used by them. By continuing to target and cater to this very big niche, Discord avoided competing head-to-head with established players in the general instant messaging panorama, like the aforementioned Skype, Facebook Messenger and Hangouts, and also against more mobile-centric solutions like WhatsApp or Telegram.

I believed that at some point, Discord would either gradually drop the “chat for gamers” moniker, or introduce a separate, enterprise-oriented service, perhaps with a self-hosting option – although Slack has taught us that such an option isn’t necessary for a product to succeed in the enterprise space. This would be their true money-maker – after all, don’t they say the big money is on the enterprise side of things? Every now and then I joked, half-seriously: “when are they going to introduce Discord for Business?”

I was half-serious because my experience using Discord, a supposedly gaming-oriented product, for all things non-gaming – like coordinating an open source project or working remotely with my colleagues – was superb, better than what I had experienced in my admittedly brief contact with Slack, or in the multiple years throughout which I used Skype and IRC for such things. The “for gamers” aspect was really a stain on what is otherwise a product perfectly usable in formal contexts, for things that have nothing to do with playing games, and in some situations it stopped me from providing my Discord ID and suggesting Discord as the best way to contact me over the internet for all the things email doesn’t do.

These last few days, Discord did something that solved the puzzle for me and made their apparent endgame much clearer. It turns out their focus on gaming wasn’t just because the company behind Discord was initially a game development studio that had pivoted into online chat, or because it was a no-frills alternative to TeamSpeak (that did so much more), nor because it was an easy market to get into, with typically “flexible” users who know their way around installing software, are often eager to try new things, use any platform their parents are not on, and share the things they like with other players and their friends. I mean, all of these could certainly have been factors, but I think there’s a bigger thing: it turns out Discord is out to eat Steam’s (Valve’s) lunch. Don’t believe me? Read their blog post introducing the Discord Store.

In hindsight, it’s relatively obvious this was coming; it’s a move so genius it must have been planned all along. Earn the goodwill of the gamer community, get millions of gamers who just want a chat client that’s better than what Steam and Skype provide while being as universal as those among the people they want to talk to (i.e. gamers), and when the time is right, become a game store which just happens to have the millions of potential clients already in it. It’s like organizing a really good bikers’ convention, becoming famous for organizing a really good bikers’ convention, and then during one year’s edition, ta-da! It’s also a dealership!

The most interesting part about all this, in my opinion, is that Discord’s and Steam’s histories are, in a way, symmetrical. Steam, launched in 2003, was created by Valve – initially a game development company – as a client for their games. Steam would evolve into what’s certainly the world’s most recognizable and popular cross-platform software store and software licensing platform, with over 150 million users nowadays (and this number might be off by over 30 million). As part of this evolution, Steam got an instant messaging service, so users could chat with their friends, even in-game through the Steam overlay. After a decade without major changes, a revamped version of the Steam chat was recently released, and it’s impossible not to draw comparisons with Discord.

The recently introduced Steam Chat UI. Sure, it’s much nicer, and you can and should draw comparisons, but it’s no Discord… yet.

I was of the opinion that Steam could ditch its chat component altogether and just focus on being great at everything else it does (something many people argue Valve hasn’t been doing lately), and I wasn’t the only one thinking this. We could just use Discord, whose focus was being great chat software, and Steam could focus on being a great store. But now I completely understand what Valve has done, and perhaps the major failure I can point out right now was simply taking too long to draft a reply. Because, on the other, “symmetrical” side of the story…

Discord was developed by Hammer & Chisel, recently renamed Discord Inc., a game development studio founded in 2012, which only released one unsuccessful game before pivoting into what they do now – which used to be developing an instant messaging platform, but apparently now includes developing an online game store too. Discord, chat software that got a store; Steam, a store that got chat functionality, both developed by companies that are or once were into game development. Sadly, before focusing on the game store part of things, Discord, Inc. seems to have skipped the part where they would publish great games, their sequels, and stop as they leave everyone asking for the third iteration.

It is my belief that not long after Discord became extremely successful – which, in my opinion, was some time in 2016 – and a huge number of gamers got on it, they set their eyes on becoming the next Steam. It’s not just gamers they are trying to cater to: they started working with game developers to build stuff like Rich Presence long ago, not to mention their developer portal was always focused not just on Discord bots, but on applications that authenticate against Discord and generally interact with it. This certainly helped open communication channels with some game developers, which may prove useful for getting games on their store.

Discord is possibly trying to eat some more lunches besides Valve’s, too. Discord Nitro (their subscription-based paid tier, which adds extra features such as the ability to use custom emoji across all servers or to upload larger files in conversations) has always seemed to me like a poor value proposition, but I obviously know this is not the universal opinion, as I have seen multiple Nitro subscribers. Maybe it’s just that I don’t have enough disposable income; anyway, Nitro just became more interesting, as now “It’s kinda like Netflix for games.” From what I understand, it’ll work a bit like Humble Monthly, but it isn’t yet completely clear to me whether the games are yours to keep – like on Humble Monthly – or if it’s more like an “extended free weekend”, where Nitro users get to play some games for free while they are in rotation. (Update: free games with Discord Nitro will not be permanent.)

This Discord pivot also presents other unexpected ramifications. As you might know, on many networks all game-related stuff (like Steam) is blocked, even though instant messaging and social networks are often not, as they are used to communicate with clients, suppliers, or even between co-workers, as is the case with Slack. I fear that by introducing a store, Discord will fall even more into the “games” bucket, and once it definitively earns the perception of being a games-only thing, it’ll be blocked on many work and school networks, complicating its use for activities besides gaming. The positive side is that, if they do decide to launch that enterprise version, this is an effective way of forcing businesses to use it instead of the free version, as the “general populace” version will be too tightly intertwined with the activity of playing games.

I’ll be honest… things are not playing out the way I wish they would. Discord scares me because now I feel tricked, and who knows what other tricks they have up their sleeve. I would rather have an awesome chat and an awesome store, provided separately, or alternatively, an awesome chat and store, all-in-one. (And if the Discord team reads this, they’ll certainly say “but we’re going to be the awesome chat and store, all-in-one!”) But at this rate, we’ll have two competing store-and-chat platforms… because we didn’t have enough stores/game clients or instant messengers, right?

Fortunately, the charging one has been solved now that we've all standardized on mini-USB. Or is it micro-USB? Shit.

Because of course this one had to be here, right? I could also have added a screenshot of Google’s IM apps, but I couldn’t bother finding screenshots of all of them, let alone installing them.

You can of course say, “just pick one side and your life will be simpler”, but we all know this won’t be the case. Steam chat is a long way from being as good as Discord, and the Discord store will certainly take its time to become a serious Steam competitor. Steam chat will never sound quite right for many of Discord’s not-officially-assumed use cases; for example, even if Steam copies all of Discord’s features and adds the concept of servers/guilds, it’ll never sound quite right to have the UnderLX server on Steam, will it? (Well… unless maybe UnderLX pivots into something else as well, I guess.) Similarly, it’ll be harder to “sell” Discord’s non-gaming use cases by telling people to ignore the “for gamers” part, as I’ve been doing, if Discord is blatantly a game store and game launcher.

Of course I’ll keep using Discord, though I’ll probably not recommend it as much now, and of course I’ll keep using Steam, mostly ignoring its chat capabilities – not least because most people I talk to are not on there, and most of those who are, are also on Discord. But for now, I’ll keep the games tab on Discord disabled, and I seriously hope they’ll keep providing an option to disable all the store/launcher stuff… so I can keep hiding the monster under the bed.

Internet forums in 2018: are they really dying?

In the introductory post to this series about the state of internet forums, I mentioned that, to me at least, forums felt like a relic of the past, a medium many internet users will never experience, and that many forums were seeing a downwards trend in the amount of activity in the past few years. But is this just a personal feeling supported by anecdotes, or is this really a general situation?

In the previous post, I also said that these posts would be subjective texts posted to a personal blog, not scientific studies. However, I thought it would be interesting to use these posts to do some introspection and try to understand where this feeling that “forums are dying” comes from. After all, it might just be that I and my circle of friends are abandoning forums, and (in a possibly correlated way) the forums we used to frequent are also dying, while the big picture is quite different. There’s also the possibility that this trend is limited to certain cultures, regions of the world, or specific forum topics/themes.

To do this “introspection”, I’ll be going through some of the forums I know about, and maybe even some I don’t know about, to see how they are doing. I have good news: you can completely skip this lengthy post and you probably will still enjoy the rest of the series. Yeah, this series will still be primarily anecdote-based, but at least we will have looked at a slightly larger number of anecdotes – and we will have taken a deeper look at them. Yay for small sample sizes. Let’s start with the ones I know about.


FreeVPS

This forum was founded in April of 2010, and it is the first forum I remember being a part of, although I know for sure I signed up for other forums before it – also related to web hosting; I just don’t remember their names anymore, and I’m sure they have not been online for years now. I was one of the first 20 members of that forum, and I ended up being quite a relevant member: I was a moderator there for over a year, between 2011 and 2012 (or even the start of 2013). I actually got through some “drama” together with the rest of the team, involving forum ownership/administration changes.

I put “drama” in quotes, because to this day I’m still not sure whether the things we were seeing as extremely serious threats were actually that serious… keep in mind that, at the time at least, most of the staff was under 20, perhaps even under 18. I know that at some point legal paperwork was flying around against us, someone wanted to trademark FreeVPS and take the forums from us, or something like that. Now I find all of this a bit cringe-worthy, but I learned quite a lot of stuff (from systems administration, to project management, time management and other soft-skills). Anyway, let’s set the nostalgia aside…

This was actually the first forum I “abandoned”, in the sense that I stopped going there as frequently, or participating as much as I initially did. And so it was only while “researching” for this post that I found out that this community, too, is in trouble. Activity levels are way, way down from the “golden days”, with what seems to be an average of five posts per day – and sometimes a day goes by without a post. Back in those “golden days”, there would be over 100 posts per day, so much so that the statistics page of the forum still indicates an all-time average of 73 posts/day.


Cemetech

This one is not dead; it’s just… trying to find a way to evolve, maybe reinvent itself a bit, in an attempt to bring back the glory of the earlier days. Founded in 2000 – just barely after I learned the basics of how to read Portuguese (let alone write English) – Cemetech (pronounced KE’me’tek) is a community focused on graphing calculators and, to a lesser degree, DIY electronics and other… nerd stuff. At least, this is how I saw it back when I joined in November 2011. To be fair, of all the things this community had to offer and discuss, I only ever really cared about the Casio Prizm discussions. After I moved on from Casio Prizm development – and at a time when forum activity was already decreasing – I tried to foster discussion of my Clouttery project, with mild but disappointing success.

Cemetech is a bit of a strange beast because, at least in my view, it hinges a bit too much on the interests, activities and projects of its founding member KermM and other staff, who, to me at least, always seemed to be close friends with each other. As one example, I felt like a lot of attention was given to Casio calculators, and especially the Prizm models, at a time when KermM was “fed up” with TI for some reason; but once TI started releasing new models, with color screens and other features debuted by the Prizm, Casio calculators were slowly forgotten in that community. I’m sure they didn’t do this on purpose – for many years, until the Prizm was launched and caught the eyes of the staff, the community was much more TI-focused than Casio-focused, and it wasn’t unexpected to see it return a bit to its roots. Perhaps I should rephrase my sentence where I said it hinges too much on the staff’s interests… what I mean is, the staff there used to be a discussion promoter, posting a lot on every thread; since KermM kind of left, to pursue his PhD and then to found a startup, Geopipe, activity levels took a hit – at least, and again, from my point of view.

This is quite an anecdotal example… not only is the community this “strange beast”, it’s also focused on the quite niche topic of graphing calculators – even though it always had many more topics, it was calculator stuff that brought in the most people and the most posts. As a hacker’s platform, graphing calculators are dying, due to new exam regulations in multiple countries imposing restrictions on their hardware and software, and sometimes doing away with this kind of calculator entirely.

It isn’t surprising that a forum about a dying niche subject would be dying. Cemetech now seems to be down to about 40 posts per day… wait, that doesn’t seem very low. But if you look at the forum index alone, you’ll see that many sections have not seen a new post this month, and the number of topics with active discussion seems much lower to me than it once was. Oh well, maybe it’s just me – this is a subjective blog post, after all.


CodeWalrus

Yet another community focused on graphing calculators – at least initially. This one is much more recent than most of the communities around this topic, but it too has been struggling with a lack of activity. In my opinion, it never had that much activity to begin with, but there’s no doubt it went through some rough months – fortunately, it seems to be speeding up a bit now.

CodeWalrus was founded in October 2014… and I’m not sure how to explain this, and I’ll probably get it wrong, but it was founded by people previously associated with Omnimaga (yet another calculator community) – including its founder – who no longer wanted to be involved in Omnimaga. That older community, too, is pretty much inactive these days (see stats). As for CodeWalrus, which also has a stats page, things look way brighter. At the very least, you certainly cannot accuse the members of not trying to cheer things up.

Before I move on to talking about another forum, note how I said that CodeWalrus was initially more focused on graphing calculators (by virtue of the interests of its members, not because that was a topic imposed by the administration). Well, their strategy – in place from the very first day – of catering to people with other interests seems to have paid off. A brief look at the active topics list shows no posts related to graphing calculators… wow.

XDA Developers

I don’t think this community needs to be introduced to my readers. It’s a giant community, and I can’t quite take its pulse the way I did with the other anecdotes in this post. With over eight million members, its scale is completely different from that of the other forums I have mentioned so far. It isn’t a dying forum, and that’s exactly why I brought it up here: to show that definitely not all forums are dying. Or maybe only big forums survive? We shall look into that in future posts.

This forum might not be dying, but it could still be interesting to see whether the variation in number of posts and sign-ups is positive or negative. I searched, and searched, and could not find a live statistics page, or any up-to-date report. With a forum so big, it’s quite possible that computing these kinds of statistics in a real-time fashion is simply unfeasible. However, certain forum views still show the current totals at the bottom. With the help of Internet Archive’s Wayback Machine, we can plot stuff over time…
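As a sketch of what that Wayback Machine plotting amounts to in practice: once you have a pile of archived snapshots, extracting the totals is mostly a matter of running a regular expression over each one. Note that the footer markup, the dates and the numbers below are all made up for illustration – every forum prints its totals differently.

```python
import re
from datetime import date

# Hypothetical footer snippets, as they might appear in archived copies of a
# forum index page; wording and figures are invented for illustration only.
snapshots = [
    (date(2012, 1, 1), "<div class='stats'>Total Posts: 48,123,456</div>"),
    (date(2015, 1, 1), "<div class='stats'>Total Posts: 61,002,331</div>"),
    (date(2018, 1, 1), "<div class='stats'>Total Posts: 70,555,908</div>"),
]

def extract_total_posts(html):
    """Pull the 'Total Posts' figure out of a page, dropping thousands separators."""
    match = re.search(r"Total Posts:\s*([\d,]+)", html)
    return int(match.group(1).replace(",", "")) if match else None

# One (date, total) point per archived snapshot - ready to plot.
series = [(day, extract_total_posts(html)) for day, html in snapshots]
```

Feed every archived capture through the same extractor and you get one data point per snapshot, which is all a posts-over-time chart needs.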

Take from that what you will… but its growth doesn’t seem to be decelerating, even if the forum apparently isn’t as active as it was in 2012-2013. It’s also possible that, despite the slight decrease in the speed at which new posts are added, the quality of the discussion has improved, for an overall better result.

SkyscraperCity

Still in the big forums league: SkyscraperCity, a community with about one million members and over 100 million posts, founded in 2002, which claims to be the world’s biggest community on “skyscrapers and everything in between”. In practice, it’s a forum about urbanism that, alongside many international discussions, also hosts regional sub-forums where all kinds of stuff gets discussed. In the Portuguese forum, you can find topics ranging from architecture and urbanism to transportation and infrastructure, as well as general topics about what’s going on in the country, and completely off-topic stuff like discussion about what’s on TV. It is at SkyscraperCity that you can find the most forum-based discussion around the Lisbon subway, for example – with over 3000 posts per year on that subject alone.

At that forum, I only participate in the Lisbon subway discussions, so even though I vaguely know about the other sub-forums and topics, I have no idea whether the forum is more or less active than it used to be. Using the Wayback Machine again, let’s take a look at the evolution of the number of members and total posts over the last 10 years.

This graph is even less interesting than the XDA one. The number of members and posts has been growing essentially linearly. There is a slight deceleration in the last three to four years, especially in the member count, but that could be due to better spammer detection systems (something as simple as a better captcha system could have that effect).

SkyscraperCity is yet another forum that, despite using antiquated software and not having the best availability or reliability record, doesn’t seem to be going anywhere. It isn’t displaying the “exponential user growth” that investors and shareholders so like to hear about. But hey, one of the nice things about forums is that they generally don’t need to boost user counts like that, because they typically don’t have shareholders to report nice numbers to.

LowEndTalk

Looks like we’re back to the subject of web hosting. LowEndTalk is the discussion forum of LowEndBox, a website that lists deals on low-spec virtual private and dedicated servers. I never participated in or followed this forum in any way, but I have known about its existence, and have had a good idea of its “dimension”, for almost as long as I have known about FreeVPS.

LowEndTalk makes my job a bit harder because, as far as I can see, they don’t list the total number of members or the total number of posts anywhere. Fortunately, they do show the total number of threads (or “discussions”, as their forum software calls them), rounded to the nearest hundred. Let’s see if we can get any sort of trend out of this. Wayback Machine to the rescue…

Judging by the thread counts alone, it looks like LowEndTalk is yet another forum that isn’t going anywhere, with over 7000 new threads added each year. The chart gives the impression that growth is slightly more linear than it actually is: the rise in the number of threads is actually slowing down a bit, from an average of about 8700 threads/year in 2013-2015 to about 7500 threads/year from 2015 to the present. Nothing to worry about.
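The arithmetic behind those rates is nothing fancier than dividing the difference between two archived thread counts by the years between the snapshots. A minimal sketch, using illustrative counts (not LowEndTalk’s real totals) chosen to reproduce the rates quoted above:

```python
def threads_per_year(count_start, count_end, years):
    """Average number of new threads per year between two archived snapshots."""
    return (count_end - count_start) / years

# Illustrative thread counts only, picked to match the rates in the text.
print(threads_per_year(30_000, 47_400, 2))  # 8700.0 - the 2013-2015 pace
print(threads_per_year(47_400, 69_900, 3))  # 7500.0 - the pace since 2015
```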

You might be wondering why I brought LowEndTalk up, since it’s not an especially big or well-known forum, nor one I frequent. The reason is that during my “research” for this post, a few things caught my eye when looking at LowEndTalk.

For a start, their forum software (Vanilla Forums) is not one I commonly see in the wild – or at least not one I recognize – despite the claim by Vanilla Forums that they are “Used by many of the world’s leading brands”. At least in its LowEndTalk incarnation, it is extremely “clean”, simple and still good-looking. It doesn’t use “responsive” web design, but it’s very readable and fast. It’s definitely different from your run-of-the-mill SMF/phpBB/MyBB-powered forum. But is it better? We shall look into this in a future post.

LowEndTalk only became a “traditional forum” in 2011. Before that, it was a Q&A website (like StackOverflow, for example) powered by OSQA. Using the Wayback Machine, it’s easy to see that some of the hot topics back then were discussions about moving from OSQA to something more appropriate to the community’s needs. This community therefore has yet another interesting quirk: it was not born as a “traditional forum”, but moved into one after the community was already bootstrapped and the Q&A model was deemed inappropriate.

LowEndTalk has recently set up a Discord “server”. Unlike what seemed to be the plan at CodeWalrus at one point, it is apparently not their intention to move discussion out of the forum and into Discord. Still, I think this is an interesting data point for a future post about forum alternatives – what are people moving to, after all?

Rockbox Forums

Including this one here is pretty ridiculous, but I thought I’d do so anyway, even if only as some sort of honorable mention to the thousands of small forums hosting communities of developers and users of open source projects. I feel that these kinds of forums, along with self-hosted issue trackers (such as Flyspray or Trac), were a much more common sight in the distant pre-GitHub past.

Anyway, what are Rockbox and its forums? Rockbox is an open-source firmware for that ancient thing smartphones made us forget about: the MP3 player. You know, of the old-school iPod-with-click-wheel kind.

I used Rockbox for a couple of years on the 2nd generation iPod Nano my parents won in some sort of raffle and never really used much. iTunes made it a PITA to use; my family (unlike me) isn’t that much into listening to music and maintaining a music library; and finally, my dad was into PDAs and Pocket PCs way before the iPhone even launched.

For us, the thought of playing music on an MP3 player was already kind of strange even when that iPod was brand new – why go through the hassle of using iTunes, transcoding music files, and having to charge and carry around yet another device… when we could just take a full-size SD card, insert it into one of those ginormous Pocket PCs (which also made phone calls) and listen to our existing MP3s and WMAs (no M4A transcoding required!) using Windows-freaking-Media Player on Windows Mobile (or any other player of our liking, since those phones could run arbitrary EXEs compiled for Windows Mobile, and many alternative player applications existed).

Anyway, to finish telling yet another personal story of my life: I found out about Rockbox at a point when the iPod was already forgotten in a drawer. Long story short, that iPod has seen 500% more use ever since I installed Rockbox on it. Rockbox can even run Doom, and it has a Game Boy/Game Boy Color emulator – how awesome is that? (And no, Rockbox is not Linux-powered, but there was a separate project that ported Linux to some older iPods.) Sadly, the capacitors in my iPod’s screen have failed and only 30% of the screen is readable now, making the device a bit hard to use.

I would still use that iPod to this day if it weren’t for that malfunction (which I don’t feel like spending $10 on a new screen to fix): the iPod has just 4 GB of storage, but Rockbox can play Opus, the awesome audio format that’s simply the best in terms of quality-to-file-size ratio (and I hope this sentence will look extremely dated in 10 years). I can fit the relevant part of my music library in there by converting over 10 GB of high-quality MP3, M4A and FLAC into less than 4 GB of Opus at 96 kbit/s – which sounds great.
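For the curious, the capacity math at a constant bitrate is straightforward: storage divided by bitrate gives playing time. A back-of-the-envelope sketch (using decimal gigabytes; real Opus files use variable bitrate, so this is only an estimate):

```python
def hours_of_audio(storage_gb, bitrate_kbps):
    """Hours of constant-bitrate audio that fit in the given storage (decimal GB)."""
    storage_kbit = storage_gb * 1000 * 1000 * 8  # GB -> kbit
    seconds = storage_kbit / bitrate_kbps
    return seconds / 3600

print(round(hours_of_audio(4, 96), 1))   # 92.6 hours of Opus at 96 kbit/s
print(round(hours_of_audio(4, 320), 1))  # 27.8 hours of 320 kbit/s MP3
```

So those 4 GB hold roughly 90 hours of 96 kbit/s Opus – more than three times what the same space holds at typical high-quality MP3 bitrates, which is consistent with squeezing a 10+ GB library into under 4 GB.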

Rockbox, as you might guess, is pretty much dead these days. It never got to the point of supporting the most recent MP3 players or the latest iPod models, thanks to Apple and their locked bootloaders. MP3 players became a niche/audiophile product as people moved on to smartphones, and as demand dropped, prices went up. Perhaps most importantly, aside from the devices it could still gain support for, Rockbox is a finished product: it can play any relevant music format (including tracker music), and it has everything you could possibly want from a music player in terms of sound effects/adjustments, playlist control and library browsing. And now for something that might catch you off-balance: Rockbox is a project by Haxx – yes, the same Haxx of the extremely popular and successful program curl! In fact, there is a noticeable overlap between the two development teams. Both are awesome pieces of software.

I just realized this post was supposed to be about “forums” and not “alternative firmware for embedded devices and opinions about sound formats”… shit. Anyway, Rockbox is dead and, surprise! the forums are pretty much dead too, for obvious reasons – it would be quite interesting if the forums outlived the open source project they were built around, but it doesn’t seem like that will be the case here, and I have never seen it happen.


Maybe forums aren’t dying left and right like I thought they were. But they don’t seem to be growing as fast as the rest of the internet. IDK, at this point I’m just making stuff up, because I don’t know how fast “the internet” grows. Perhaps it has to do with that shareholder thing: forums don’t usually have to report inflated numbers to anyone. Or maybe people really are locked inside the bubble-decorated walled garden of the EVIL ZUCC. Yeah, let’s pretend that is the case, so that my plans for the following posts are not completely foiled. As for this pointless anecdote-based post, this is its end.

BTW: did you know I started working on this post on the 7th of March… only to leave it rotting unfinished from the 9th of March onwards… before finally finishing it today? But it’s fine: this way, I could include a ZUCC reference while people are still outraged at Facebook, and before they go back to using it happily ever after.