How do you make a sequel to the highest-grossing entertainment product of all time, which is also the second most sold video game after Minecraft? A yet unreleased sequel so impactful in online culture that its trailers break YouTube viewership records and get recreated by fans, not just in dozens of other games, but also in live action, with the involvement of brands and personalities that would normally be disconnected from the games industry. A sequel that, years away from its release, already had multiple podcasts dedicated specifically to its discussion; a successor which is almost guaranteed to break sales records yet again, even if it were to somehow get a worse critical reception than its predecessors.
This is the question that thousands of people over at Rockstar Games have been working to answer for over a decade; an answer which millions of fans have been waiting to examine for nearly as long, while desperately trying to decipher every hint of Rockstar’s progress on Grand Theft Auto VI. It would be extremely unlikely for the final product not to be critically acclaimed, but it being unanimously perceived as nearly perfect would be equally improbable: no piece of media with such a large and diverse audience can possibly fully align with the preferences of every single fan. When the dust settles, every single one of us shall have remarks to note on the scorecard, while still recognizing how much of a masterpiece it will be.
I find the anticipation around certain high-profile game releases, particularly this one, to be positively refreshing. After all, video games are software, but such anticipation presents a stark contrast to the way many current software innovations feel forced down users’ throats – an increasingly common feeling as more software is delivered as continuously evolving services, rather than individually acquirable major releases. This level of widespread anticipation doesn’t really occur for any other type of software anymore, and hasn’t for a long time: even limiting ourselves to the consumer space and going back a decade, if a new Office, Gmail or Instagram redesign had been delayed for a year, there would hardly have been any uproar. There were no “we got [thing] before Windows 11” memes, and anything AI has too many socioeconomic implications to be uncontroversial for the foreseeable future.
The fact that plenty of people still look forward to new games, even as they seem uninterested in most other software developments, is unsurprising: games are entertainment and not tools (usually); people are free to choose what games they play, and don’t generally feel that a choice was forced upon them, so it is natural that they will spend some of their time making such choices, and being excited about upcoming options that they think will suit them. Then, considering that there is no lack of new games being made, the fact that millions of people choose to be interested in GTA VI is proof of the great work that Rockstar Games has done over the years – as is the fact that some of them seem to mainly be interested in loudly complaining that it won’t be to their liking, as if a) they were entitled to an upcoming GTA title that’s to their liking, and b) they already knew that, tragically, it won’t suit their preferences.
For your convenience, the rest of this opinion piece is divided into sections with titles that attempt to be clever:
Living up to the full expectations and preferences of millions of players is impossible. Grand Theft Auto V certainly didn’t: some players complained about how the game was missing features and mechanics from prior titles (notably, from Grand Theft Auto: San Andreas); others complained that the story felt weak, missing gravitas, with “plastic” and over-the-top characters that didn’t feel particularly likeable, or with underwhelming endings; others criticized how the world did not offer many structured activities once players were done with the story and the secondary missions, making it a bit less appealing than the worlds of previous games. The mission design also received criticism for being restrictive, limiting player creativity; this NakeyJakey video includes insightful criticism of such aspects.
A likely larger set of players was left satisfied with the quality of the single-player content, but not its quantity: some wanted an even longer story, others wanted expansions in the style of those made for Grand Theft Auto IV; largely thanks to the theft of GTA V’s source code, today it is known that the initial vision for the game encompassed such single-player DLCs, as well as additional missions and mechanics that, at best, were possibly adapted into GTA Online content, years later.
None of the criticism prevented GTA V from breaking a handful of sales records within the first week of release back in 2013, later becoming the second most sold game of all time, after Minecraft – whose first barebones release had happened three years prior. Reports indicate that, by now, GTA V has generated close to $10 billion in revenue. Yet you won’t have to dig very deep to find a good number of people saying they preferred one of the earlier GTA titles – whose revenue, combined, didn’t surpass that of GTA V alone.
Great games sell well, but the individual preferences of the player base are hardly fully aligned with revenue numbers. Fortunately, people don’t just play the games they think are perfect. A vast set of mostly satisfied players is definitely more profitable than an extremely satisfied, but smaller, cult following. Will GTA VI be able to appease a record number of people without irreparably disappointing the most devout fans of the series?
I doubt anyone who is looking forward to VI’s release wants to acknowledge this now, but despite the massive budget and development time that went into it, it’s certain that once the dust settles, a good number of GTA fans will continue to prefer the earlier titles, if only out of nostalgia. The earlier games have always had fewer fans (for a start, they were released to a smaller, less developed market), but those who revisit these older titles regularly are the ones who really like them.
Regarding fan nostalgia, the appearance of a PlayStation-like console in the second GTA VI trailer led to some of the wilder speculation going around, according to which the 2002 game Grand Theft Auto: Vice City would be playable within GTA VI, to some extent. In my opinion, this is technically feasible, and would be a fun way for Rockstar to cheekily satisfy those nostalgic players. It is, however, something that might not have a good reason to exist in the game beyond being a cool technology demo and a novelty. I don’t think it is very high on anyone’s wishlist.
Speaking of speculation, that’s exactly what I am partaking in. I think I’ve made mostly agreeable, low-risk predictions so far – perhaps even disappointingly so – but from reading the title alone, I am sure that many would be quick to trash me in the comments, claiming I hate the game, or Rockstar, or something along those lines, just for alluding to the quite basic notion that a piece of art can’t possibly be perfect to everyone. Fortunately, this is not one of those common social platforms; this is that great innovation called the “personal blog,” one where I encourage such individuals, who don’t know what nuance is, to either leave their thoughts in their head or publish them elsewhere, preferably some place where I won’t be notified about them.
It’s never too late to speculate
I am not truly qualified to speculate on what GTA VI will or won’t be, but then again, who is? Most of the fans speculating, even the most respected social media personalities in the space, don’t have game development experience. More interestingly, I would say that game developers, even those with experience in the very genre of open-world action-adventure titles, and even former Rockstar employees from decades past, likely don’t have enough experience to accurately comment on GTA VI development matters.
We are talking about a game whose budget is the highest of all time, with a development timeline that’s by far the longest of all the games in the series, a level of public anticipation that beats all games except perhaps juggernaut series like Half-Life, and the naturally high ambition that comes with making a successor to the highest-grossing entertainment product of all time. It’s also being developed by what may be the most secretive company in its space. All combined, this is a really unique circumstance. It is safe to say that the only people who could possibly make accurate comments on the development of GTA VI would be those who are working on it, and maybe not even them.
My impression from the leaks and rumors over the years is that most people working at Rockstar Games are not aware of many of the aspects of their upcoming games. This type of “confusion” doesn’t even have to come from active efforts to keep people in the dark: I work for a company that’s roughly one tenth the size, and I know basically nothing about most of the ongoing initiatives – and this is at a place that’s very internally transparent, and where I could easily get information about anything that goes on in the business by just asking… and by paying a bit more attention in all-hands meetings. There’s usually just so much information that isn’t all that relevant to one’s specific role, that it’s easy not to be well-versed in it all.
Note that the development of GTA VI is uncharted territory even for Rockstar Games themselves. This is their usual form, anyway: I think that for every game they’ve made, they’ve never significantly repeated their own development processes, and their ambition has always increased significantly. The story of how VI will have come to be will certainly be quite different from that of IV and V. Between the lack of visibility most employees probably have into the details of how the process is going, and the fact that it is a complex and lengthy one with plenty of space for small changes still to be made, I wouldn’t take any insider accounts as gospel, even when they are truthful to the source’s perceptions.
One recurring comment I’ve seen online is, paraphrasing, to “let them [Rockstar] cook,” when it comes to discussing leaks, rumors and personal wishlists for the next GTA title – as if public comments were detrimental to the game’s development. The way I see things, we can comment and speculate all we want; people always have, people always will. Of all game developers, I feel that Rockstar is one of the best at shielding themselves from outsider opinions about in-progress work, and the best at only showing to the public what they actually want the world to see (extraordinary circumstances, like the 2022 leaks, notwithstanding). As long as people are not being disrespectful or harassing employees, I doubt speculation is causing any harm, and everything insider sources have ever shared about the topic points to them having great fun with how wrong the fan theories often are.
Some of the rumors currently going around take for granted that GTA VI is currently playable from start to finish and that the game only needs “polishing.” While I definitely believe the former, I highly doubt the latter, unless we use an extremely broad definition of “polishing.” The fact that the story can be played through doesn’t mean that secondary content like side missions, open-world activities and soundtrack production, or even cross-cutting aspects like localization, are finalized – and I don’t think the work on those would count as mere “polish.” After all, there is so much to a GTA game besides the more linear story aspects.
Other rumors point towards the story not even being finalized yet, with the final chapter(s) being stuck in development hell. For me, it is very difficult to believe that the story doesn’t have a conclusion, or set of possible conclusions, written yet. However, I can picture scenarios where the final missions are difficult to realize exactly as originally written, or where the developers have trouble getting these last moments to evoke the desired emotions, and the additional iterations required to deliver good storytelling are causing some churn. I can also imagine a scenario where the story has different possible endings, Rockstar wants the post-credits gameplay to be affected differently by the consequences of those endings, and it’s the realization of these consequences that is taking more effort than expected. Through a game of broken telephone – and a lot of these rumors seem to come from sources that know sources – the rumor would end up becoming that the final chapter is in development hell.
It’s interesting how we know so little about the development of such a giant project that each of us can easily believe rumors that are essentially incompatible with each other, while we are less than a year from the currently targeted release date and have seen two trailers already. It’s particularly amusing, given that the game’s “unofficial trailer” consisted of so much leaked content, including plenty of developer captures originally shared in Rockstar’s Slack workspace. GTA VI is so vast that this much content still doesn’t come close to even telling half of the story.
Personally, I highly doubt Rockstar would need an entire year just for “polish,” particularly since we’re talking about a company with more than sufficient people to tackle many different problems simultaneously. But what “polishing” encompasses varies between points of view, and Rockstar has used this word in the past as a way to justify delays without having to commit to any details. They did so when delaying Red Dead Redemption 2 for the second time, and previously when delaying the original release of GTA V on PC. I don’t see “polish” as anything other than marketing speak for “it isn’t done because it isn’t done.”
The fictional past doesn’t explain the fictional future
Despite having a radically different setting than GTA VI, Red Dead Redemption 2 has been used as the basis for some of the fan expectations and theories about how the upcoming GTA title is going to play, all the way from particular game mechanics to the way the story will unfold.
This very much comes down to personal taste, but I am not too enthralled by the notion of a GTA game that feels too much like RDR 2 did. The level of world detail and visual quality of the latter is top notch, and the GTA VI trailers show that we’ll continue to see improvements on this front. But I wasn’t a fan of the clear separation of the chapters in the story of RDR 2, nor of the generally slower-paced storytelling. Despite its world having more structured activities to pursue, even after the completion of the story, I think its setting and the design of certain game mechanics didn’t encourage the “go anywhere, mess around and find out” type of gameplay that’s been a staple of GTA games since the first 3D entry in the series.
Don’t get me wrong, there are plenty of ways to have that chaotic sandbox type of fun in RDR 2 – and one can argue that the absurdity of the chaos is only increased by the serious world tone. However, to me, RDR 2 is at its best either when it is in peaceful mode or when it is presenting mission-driven action/combat; more violent emergent gameplay didn’t feel so good to me, causing a very palpable ludonarrative dissonance that took me out of the immersion. I never felt that with the same intensity in a GTA game; maybe I’m just bad at playing as a low honor Arthur? With this said, I appreciated the more serious stories of RDR 2 and of GTA IV, compared to that of V. My hope is that Rockstar will be able to once again tell a dramatic high-stakes story, while making it feel more action-packed than RDR 2 felt to me, and while still allowing the unfettered chaotic fun moments to naturally take place.
I don’t think all of the fan favorite mechanics of RDR 2 would work well in a GTA game, and my view is that the two series have different audiences. In practice these are not wholly disjoint groups of people, but when someone goes to play a GTA game, they’re often looking for an experience that is not exactly what the Red Dead Redemption games offer; in each of those moments, that hypothetical player may as well be two different people. I would almost argue that the essential aspects of GTA gameplay have to be somewhat “simplified” compared to those of RDR 2. It’s also important to note that the audience for a GTA game is broader than that of RDR, if for nothing else, just because the former is a much more popular brand. Naturally, Rockstar wants players to enjoy their purchase, and I would be slightly disappointed but not surprised if, for the lack of a better term, they decided to “dumb down” the core mechanics of their GTA titles compared to those of RDR.
For a specific example of a possible mechanic I am not too excited about, one rumored change in GTA VI is that instead of carrying all weapons at all times, each playable character will have a more limited inventory and there will be relatively frequent opportunities to make changes to those inventories. Essentially, road vehicles would be what horses were in RDR, storing your other weapons and some inventory items. This would require players to think more actively about what weapons to carry at any given time – making things more complex, which I am not sure is a positive change. But I also see how this would improve realism and allow the “personal vehicle” to have more impact on how the game is played. In prior GTA titles, outside of specific missions, what vehicle you drove in the open world didn’t really matter, so such an inventory system could be a way to make the concept of the personal vehicle more relevant. It’s an increased “level of detail” for sure, but I suspect such mechanics would feel limiting in those moments where one mainly wants to mess around in some power-tripping fever dream.
Betrayal was one of the main themes of the RDR 2 story, and it has been understood since the first trailer that the story of GTA VI will revolve around trust. Many people seem convinced that, in the style of what happened in the story of the 2018 release, the two protagonists will betray each other, either of their own volition or because external forces lead them to it. Two of the three possible GTA V endings also consisted of protagonists turning on each other. If that indeed turns out to be the main plot device once again, then the fact that “everyone” has seen it coming is enough to explain why I’d be disappointed. I am hopeful Rockstar will avoid repeating the same note in essentially the same way, and will be able to deliver something more surprising and just as intense.
This cutting room floor can fit/fix so many leaks
With such a humongous budget and prolonged development timeline, it’s safe to assume that GTA VI will also have the greatest amount of what’s colloquially described as “cut content:” early concepts that didn’t pan out, removed gameplay mechanics, abandoned plot arcs in the main story, world locations that never came to be, unused voice lines, sound effects and original soundtracks… there are endless categories of things that could fall to the floor of the cutting room of this type of game. In fact, there’s probably enough space in the development history of VI to fit the production of an entirely different game, and if what has been rumored about “Project Americas” is to be believed, that’s sufficiently close to what happened, at the very least, pre-production wise.
For those unaware of this “Project Americas,” because not too much is known for certain, a quick summary follows: this was, allegedly, an early concept for GTA VI that was in (pre-)production from as early as 2012 until circa 2020. The action was meant to take place, at least in part, throughout the 70s, 80s and 90s and in multiple cities from both North and South America, with the main theme allegedly being the coke trade. Some aspects of these allegations have been confirmed by reports from respected journalists like Jason Schreier and Stephen Totilo, but they’ve also denied some of the associated theories floating around. According to this line of speculation, at some point, these initial plans were thoroughly changed or exchanged for ones with a more modest scope, leading us to the current GTA VI concept that entered full production at some point between 2020 and 2022.
Within GTA datamining circles, some believe that many of the assets made for this earlier game concept ended up being repurposed and adapted for GTA Online’s Cayo Perico, a location which conveniently has a South American theme. It is unknown whether the upcoming GTA game retained the “Project Americas” codename, or whether we are free to use that to exclusively refer to the allegedly canned plans – why do people with access to Rockstar insiders never think of asking these pressing questions?
Personally, I believe that Dan Houser’s departure from Rockstar Games in March 2020 may be connected to this change of plans, but the direction of causality is unclear. Regardless, even though I can’t quite explain why, I find the idea of yet another GTA taking place in the present day more appealing than that of a period piece like the original Vice City. I would like to see Rockstar explore that original Americas concept, but within its own new IP – if they’re ever going to move away from GTA and RDR, that is.
If the development of VI really was kind of rebooted at some point, certainly that doesn’t mean that the previous effort was completely wasted: for example, progress in areas like the core of the game engine and development tooling is always going to be cumulative, and as mentioned, it is believed that some of the world design and asset modelling efforts were repurposed for GTA Online, while others were likely still valid for the current Vice City concept. A well-preserved 80s car is supposed to look and sound the same no matter if it’s 1985 or 2025, after all.
With Rockstar being busy with the release of GTA V and the development and release of RDR 2, I find it unlikely that “Project Americas” ever went significantly beyond a pre-production phase. I believe the resources needed for full production would only have become available after the release of RDR 2. This means that “Project Americas” received full-steam development focus for just one to two years before the supposed change of plans. In decades past this would be enough time for Rockstar to make a hit, but in the latter half of the last decade, that was really not the pace and scope they were aiming for.
So why would this intriguing concept not move towards a final product? One hypothesis is that, as more concrete aspects came together – including, perhaps, playable vertical slices – they did not pass the internal vibe checks, prompting a deep rethink of the entire thing. Another possibility is that writers and stakeholders, due to irreconcilable visions regarding what ideas would be most commercially successful, could not agree on a finalized concept for the story and setting – this would be the narrative where Dan Houser’s departure could be connected, but we have no proof of such connection.
For the hypothetical reboot of the GTA VI concept, the justification I favor the most is really just that of scope management: Rockstar likely realized that, with the great-but-certainly-not-infinite resources and time at their disposal, they could release three or four GTA Vs’ worth of content in that single game – one Los Santos’ worth for each desired combination of city and historical setting – but without being able to meaningfully advance the quality of the storytelling and gameplay within each of these separate settings. Instead, they decided to focus on a single location and time period as usual, to deliver something that is, simultaneously, undoubtedly perceived as “next generation” while also being “safer” from a business perspective – matching what, I wager, are the expectations of most current GTA fans for what a GTA game should be.
I wanted to explore what might have become of the alleged “Project Americas” mainly to drive home the notion that, if rumors are to be even just partially believed, Rockstar is not afraid to give up on concepts that they’re not particularly happy about, perhaps even shelving good parts of multi-year efforts. Therefore, just because something was seen in the infamous leaks of 2022, or even any subsequent leaks, that doesn’t mean that something is confirmed to be in GTA VI. The cutting room floor for GTA VI is definitely more expanded and enhanced than any published GTA V edition ever was.
Lots of time, lots of money and lots of people contributing towards the same project allows for plenty of experimentation and perceived “waste.” Entire concepts, mechanics, features, locations, characters, story lines… they may all come and go, and come back only to be abandoned again. Incompatible options may be worked on in parallel, explicitly to be pitted against each other, with just the best fit surviving. With this in mind, it’s expected that many of those working on the game still don’t actually know what will make the final cut. Besides, the videos and information obtained from the intrusion into Rockstar’s Slack workspace in 2022 showed lots of content that was already outdated even back then, so imagine what may have happened after three more years and with one still left to go until release.
Fan initiatives like the GTA VI mapping project have taken much of what these leaks showed as fact – as proof that certain features will be in the final product, or that certain parts of the world will look a certain way. While, map-wise, the trailers and official screenshots have been mostly consistent with the leaks, the relative stability of the already known parts of the map doesn’t tell us anything about the many other aspects of the game. For example, some fans became convinced that the playable characters would be able to go prone, because it was shown in a leaked test video, but it could be something that was never finished, that was later removed for the sake of simplifying player movement options, or that will only be available in very limited scenarios.
Similarly, features that exist in prior Rockstar titles don’t necessarily have to be present in GTA VI, and even when they are seen in those leaked videos, for all we know, they might be visible there only because they were already implemented from a previous game. Later, someone may decide to remove them, or maybe they break as development goes on, and fixing them isn’t considered a priority. For a random example: bowling did not reappear in GTA V.
In the past, I wondered whether the 2022 leaks would lead to Rockstar changing aspects of the game so that it wouldn’t be as spoiled by the leaks, or to claim victory over the hackers in some way, by purposefully invalidating the extracted information. We now know that the main protagonists haven’t changed, but some of the most recent rumors – which I don’t find particularly convincing – mention that certain side-missions featured in the leaks have been cut. If that’s true, I don’t think the leaks will have been the main motivation. In the particular example mentioned by that gossip, the relevant leaked-and-allegedly-cut dialogue involved Jay Norris (a parody of a Zuckerberg-like figure), so perhaps the true motivation for this particular removal had to do with real world developments around social networks like Twitter/X and TikTok. Regardless of whether the leaks led to changes in writing, the combination of the ever-expanding cutting room floor with the release of more marketing materials will gradually decrease the relevance of the improperly publicized data. This will, quite literally, fix the leaks.
In addition to people’s ideas changing over time, people working on the game also come and go. I mentioned Dan Houser’s departure, and there was also Lazlow Jones’s departure, and years earlier, Leslie Benzies’s not-so-peaceful departure (Benzies went on to direct MindsEye, the infamous self-inflicted disaster of a game). These are just the well-known names; plenty of other people, most certainly including some people in positions of artistic direction, have come and gone over the twelve years since GTA V’s release and nearly seven years since RDR 2’s release. As people come and go, ideas gain and lose champions; within Rockstar, the opinion about what a present day GTA game should be like will keep changing all the way up to the release of VI, even as an increasing number of aspects are gradually finalized.
Between the possible canning of the entire original concept for the game and the more mundane iterations all game aspects go through, I can’t help but wonder if, once the dust settles, we will end up feeling that the breadth and depth of GTA VI doesn’t represent what was expected of a twelve-year wait for a sequel. Sure, we must keep in mind that RDR 2 was developed and released in the meantime. But even taking that into account, we must acknowledge that the passage of time, by itself, introduces inefficiencies and confounds the development process.
Besides the mentioned evolution of ideas coming from within Rockstar, the world outside their studios also kept advancing throughout this decade: technology capabilities, players’ expectations, and investors’ expectations are all amplified now. It’s not like Rockstar will take their ideas from the mid 2010s, get thousands of socially distant monks to put ten years of linear effort into their next game, and release a game from 2013 in 2026. They certainly have spent some effort just keeping up with the times, and that may have caused more back-and-forth than anyone will be able to accurately account for.
We can’t do anything about how much and what content gets cut, nor about whether our most desired combination of concepts and features ends up making the release, so I think the next best thing we can hope for is a repeat of what happened with GTA V, where some of the cut content and mechanics were eventually repurposed or reimplemented in later expansions. And if I’m allowed to dream a bit more, then let’s hope they won’t be exclusive to Online.
It won’t be a Grand Kitchen Sink
It’s impossible to throw every possible gameplay mechanic, every single movement feature and every imaginable story arc at the same wall, and have them all stick. I am not sure GTA VI was ever intended to be the “everything game” the way some apps want to be the “everything app;” such a proposition would definitely appeal to shareholders, and may even sound nice to many players, but realistically it’s impossible to define.
What genre would an “everything game” be? If one nevertheless tries to make such a thing, I think the result would be confusing to play, would struggle to tell any story, and could be too overwhelming to even be a good sandbox/simulation game – more or less as soon as players reached the part where, after dusting the virtual pantry in a detailed minigame of sorts, they have to realistically craft their explosives from household parts… eventually getting interrupted and having to spend real years in fake prison, from which an army of space units can be assembled in order to break them out, but only after they build an efficient factory to turn raw materials into such units (alternatively, one can post bail with real credit cards!). Oh, and there would be chess, and poker, and soccer, and real money gambling somewhere in that “everything game,” too.
It’s easy to ask for features when we don’t actually have them all implemented to try out simultaneously. But one may argue that some of the mechanics people commonly mention – like limited weapon inventories, realistic vehicle fuel consumption, more complex police behaviors, an economy with more depth to it, more varied and impactful character customization options, and so on – have actually been implemented, and in GTA V no less, by modders, including in roleplay (RP) servers. This is notable, as some of these things sound like they’d be more challenging in multiplayer contexts. If motivated hobbyists can make things happen, why wouldn’t Rockstar?
It’s true that GTA RP has been very successful, but is it the type of experience that the next game should be designed around? I don’t think so. I don’t believe that such mechanics are sought after by the majority of the GTA audience. I doubt most people look to GTA games as a general-purpose “real life simulator” or even “profession simulator,” which is what, from my point of view, most RP servers try to be (to different degrees of depth or “seriousness”). My belief is that most players want GTA to present a good diorama of the real world with some good satire, some violent action and some appealing human physiques mixed in, but they don’t want to simulate anyone’s real lives in great detail; that’s not what most people look for in fiction, anyway. The fact that the world and base mechanics of modern GTA games make a pretty good basis for RP games is coincidental.
I think most roleplayers would agree that some of the mechanics found in RP servers would get in the way of effective storytelling and would generally reduce the entertainment factor if they were a mandatory aspect of a GTA game. Even considering GTA Online exclusively, and for all the faults of Online: I would miss the non-RP official experience if RP servers were the only multiplayer option available.
I think it is positive to allow “RP-like interactions” to happen whenever they don’t disturb the “regular” gameplay, but I am really not looking forward to a GTA game full of slow, action-blocking animations like the many found in RDR 2. At the same time, I will be slightly sad if certain aspects of real life are not better represented in GTA VI, with the aforementioned fuel consumption being one of them. Balancing all systems will be the art of the craft. Maybe that’s what they mean by “polishing,” after all, in which case I can definitely see one year’s worth of work feeling short.
Then there are features which don’t really interfere with any other features, and whose presence will come down to how Rockstar decided to allocate resources – specifically, how much they’ve decided to spend on things that are not essential to the storytelling. One example is sports activities: in theory, the game could contain everything from fighting to basketball, including tennis, football and soccer, golf, heck, even esports could be featured! Would it make sense? Probably not, but would it be impressive from a “look how many things this game has” perspective? Definitely!
The Politics Policy Police
For a game to act as a great diorama of the real world with some satire mixed in, it is inevitable that it will contain some references to the societies we live in. I’d say most story-driven games feature some level of social and political commentary. Sometimes it is more obvious and on-the-nose, sometimes it’s just in the subtext, or left to interpretation. Games by Rockstar have contained satire and commentary of all varieties and subtleties, and it’s impossible for GTA VI to not have some of it too.
The very mention of “Theft” and “Auto” in the title of the series implies that automobiles and crime will be involved – what does that say about our societies!? It definitely indicates that there is private property, crime, and automobiles, in our societies, and since it’s a game title, it implies that such aspects are liable to be portrayed in a video game! That is political commentary. Unfortunately, I doubt a game series called Grand Theft Train would have found the same level of success. What does that say about our societies?!! I can’t believe anyone would make such a conservative, consumerist game that simultaneously glorifies car-centric planning and has a biased recommendation of which vehicle types are worth stealing!
We know GTA VI will, once again, take place in a fictional version of the USA, in a present-day setting. Therefore, many current US-centric themes are to be expected. To people in some cultures or with certain political orientations, this focus on the US is, by itself, considered political commentary; what does that say about their societies?! I’m hearing this shtick is overused, so let’s move on. But really, it’ll be impossible for the next GTA not to touch politics, even if inadvertently, and nobody will be happy about how it will do so: if the game’s satire is more subtle than previous titles, some will complain “they’ve gone soft!” If it appears to be too overt in its criticism or very targeted in its mockeries, then, depending on the perceived political leaning of those artistic expressions, some will either complain that “it’s woke!” or that it has caved to corporate/conservative/right-wing interests. GTA VI could be perfectly balanced when it comes to criticism of the entire political spectrum, and many would still only see the parts that don’t align with their vision.
As usual, Rockstar’s mission will be accomplished if absolutely everyone is outraged, and yet nobody resists playing the game, even if just to see what it’s actually about. I think everyone agrees that this developer likes to push the envelope of what mainstream games can feature, particularly in their GTA titles; to cause some controversy, or at least to spark public discussion, seems to have been a secondary goal for every game in the series. Sometimes the controversy doesn’t quite have the expected causes and gets out of control, like what happened in the infamous Hot Coffee case, back in the San Andreas days. Still, some healthy amount of it only helps the marketing efforts.
To tell the game’s main story, GTA V probably didn’t need a mission where players torture a random guy. However, the impact of the political commentary intended in it would have been greatly diminished if the mission didn’t play out like that. Some players would definitely have preferred if such commentary remained more on the sidelines, where it could be more easily ignored. For many artists, one of the goals of their work is to evoke emotions and thoughts, but if the art can be easily ignored, then it can’t really do that. That mission is like guerrilla street art that’s ugly and unnecessary, and yet impossible to ignore, created specifically to cause the public to react, to “feel something” – as cliché as that sounds. I hope GTA VI tries to make people feel something too, preferably without resorting to shock value.
In ten years, GTA VI might be seen as less of a period piece than how IV and V are seen today. While GTA has never been a commentator on the latest news, this next installment might focus even less on current topics and might even end up feeling like a “safer” art piece than previous titles. If that turns out to be the case, I don’t think it will have been necessarily due to Rockstar “going soft,” or because of pressure from investors, or anything like that. I think it’s more that recent years feel very fast-paced both within US society and worldwide; keeping up with such developments would be challenging in the context of the extended development period. Foregoing such actuality might be the only way to reach a decent end result.
Just five years ago, some people were rightfully worried that our collective stupidity might not outlast a virus; would you be satisfied if the representation of such a topic went beyond a secondary mission or two in the world of Vice City? The problem with trying to capture the current world in a GTA game is that it could cause the game to age quite quickly and relatively badly, coming across as tone-deaf and out of touch with present times, rather than as good satire of a certain period of modern history. Certainly, Rockstar isn’t developing VI with the intention of releasing VII just two years later; they want the game to remain relevant and palatable for potentially as long as GTA V did, and in that sense, it would be prudent to focus more on the constants that never change, instead of capturing and mocking this decade’s personalities, recent trends and scandals. And what better place to portray people’s vices, than Vice City?
Going back to the alleged concept of “Project Americas,” I wonder if the idea to revisit the later decades of the 20th century was motivated by the difficulty of dealing with the uncertainty of the present time in the face of a lengthy production process. This excerpt from an interview Dan Houser gave to GQ Magazine two days prior to RDR 2’s release provides a lot of insight into how they were seeing the current world, back then:
Dan Houser is “thankful” he’s not releasing Grand Theft Auto 6 in the age of Trump. “It’s really unclear what we would even do with it, let alone how upset people would get with whatever we did,” says the co-founder of Rockstar Games. “Both intense liberal progression and intense conservatism are both very militant, and very angry. It is scary but it’s also strange, and yet both of them seem occasionally to veer towards the absurd. It’s hard to satirise for those reasons. Some of the stuff you see is straightforwardly beyond satire. It would be out of date within two minutes, everything is changing so fast.”
The “age of Trump” never really went away, and Dan Houser left Rockstar before his first term was even over. Regardless of whether, at the time, Rockstar were pre-producing the alleged “Project Americas” concept or something else, Dan Houser’s comment remains painfully relevant today: reality is imitating satire to the point where it’s sometimes a carbon copy, and jokes can become unfunny within hours.
Dan Houser was clearly aware that any GTA released then – as now – would be seen in a political light by the public, and seemed a bit unwilling to handle that in the political climate of 2018. Both within the US and globally, that political climate wasn’t even as hot as the current one! I imagine Rockstar’s appetite for dealing with that particular type of mixed reception hasn’t increased dramatically since then, and that’s why I bet that most satire and mockery in GTA VI will try to avoid overtly sticking criticisms to either side of the political spectrum.
In the end, it is impossible to please everyone when it comes to making a representation of our current world that’s meant to be simultaneously realistic and satirical. The more realistic the “diorama” looks, the harder the satire hits and the more outrage it will cause. To be a commercial success of the expected size, the next GTA needs to appeal to as many people as possible; if it ends up being more muted in the political commentary, in my opinion, that won’t necessarily be a bad thing. Many use games to take a break from the more complex aspects of the real world, and if politics aren’t as obviously present in the game, maybe it will manage to be a better safe harbor for those players.
How to sell Moore copies
We have gone through a total of five releases of GTA V: first for PS3 and Xbox 360 in 2013, followed by a PS4 and Xbox One release in 2014, the first PC release in 2015, the PS5 and Xbox Series release in 2022, and the PC Enhanced version in 2025. Unfortunately for consumers, only the last of these was free for owners of the previous PC release. These many re-releases certainly help explain how the game sold so many copies.
This staggered approach to multi-platform launches, particularly the PC release, has been a staple of the GTA series and Rockstar titles in general, since essentially their first games. It has been rightfully criticized by many, who claim it is mostly used to encourage players to buy the same game more than once. Players may actually want to do so for reasons other than convenience, as the later re-releases tend to contain various improvements and additional features over the initial release. Not to mention, the PC version has traditionally enabled game modding – something which I hope will remain practically possible in VI. Unofficial, unvetted modding will always be superior to whatever moderated content creation tool Rockstar might allegedly be building into GTA VI.
A decade later, it’s sufficiently evident that GTA V was held back in multiple aspects for having initially released on the seventh console generation – that of the PS3 and 360. One of the few controversies surrounding the GTA V trailers concerned the high amount of trees and other vegetation showcased, which was pared back before release – perhaps to get the final product running decently. Players have also criticized the simplified physics and the not-as-destructible world elements, compared to those of the predecessor – compromises probably made to free up enough compute performance to realize new mechanics and the otherwise more detailed world. RDR 2 shows some of the aspects in which GTA V would likely have been grander, if it had released on the eighth generation exclusively.
Well over a decade has passed since Moore’s law started being declared dead, and we are two console generations past the one where GTA V debuted. The increased costs of PC hardware, particularly GPUs, reflect both the increased cost of manufacturing chips on bleeding edge processes, and the increased demand for GPUs outside of the gaming segment. It is highly likely that the next console generation is going to bring less of a performance leap than prior generations, or that the current one is going to stick around for longer, or even that the next generation won’t be nearly as affordable – might the Nintendo Switch 2 pricing be an early taste of this?
My prediction is that GTA VI will age even better from a technical standpoint than V did, barring unforeseeable improvements in hardware capabilities over the next decade or so. In the graphics department, the current console generation supports just enough raytracing acceleration to justify the use of a rendering pipeline that takes advantage of it, and the hardware architectures and APIs of current consoles are very similar to what is presently available on PC. One of the main areas where GTA re-releases have presented improvements over the earlier ones is graphical fidelity, and when it comes to this aspect, I imagine that between the console release and the PC release, Rockstar won’t have to make many adjustments besides those mandated by quality control and those needed to offer additional graphical options.
As soon as a PC release is out, I believe that players who have the means to play that edition properly tend to prefer it over the console releases. (The only reason why this wasn’t always the case with GTA V had to do with more cheating shenanigans in Online on PC compared to consoles, but I am making the assumption that Rockstar will have that sufficiently tackled in VI). The thing about the PC platform is that it doesn’t typically make consumers buy software again in order to take advantage of hardware improvements, and therefore Rockstar’s ability to indirectly use Moore’s law to sell people new copies of the same game might be diminished, if the market share of the PC platform keeps increasing – either through conventional desktop and laptop PCs, or through the new type of handheld PCs pioneered by the Steam Deck. Fortunately, with GPU prices being the way they are, it isn’t certain that PC will keep growing, but then again – it’s also uncertain whether consoles will remain affordable.
The rumor mills of the Xbox variety suggest that Microsoft may be preparing to make it so that Xboxes are more like PCs, and at the limit I can see them becoming just Windows PCs running under a dedicated mode (possibly like the “S mode” in Windows 10 or 11), turning the Xbox brand from bespoke hardware consoles into more of a badge certain PCs can wear – much like Valve’s old concept of Steam Machines, except powered by Windows. In such a scenario, Rockstar Games would no longer need to re-release updated versions of titles to take advantage of new Xbox hardware, because Xbox generations as we know them today would likely cease to exist. Rather, the certification baseline (that is, the “minimum requirements”) for hardware to have that “Xbox badge” would just keep increasing. While this is relevant as far as long-term speculation goes, I wouldn’t expect such moves from Microsoft to have a meaningful impact on the launch strategy for GTA VI, certainly not for the first couple years of its life.
Regardless of the foreseeable hardware improvements, I believe it is safe to assume that a large part of the GTA VI budget must have been put into future-proofing the technical core of the game and also its user interface, such that both can better stand the test of time. It’s unlikely that the original GTA Online was planned to be maintained for over a decade, and Rockstar had to improvise some things as they went, taking care not to break the story mode and functionality like the Rockstar Editor in the process (with mixed results, I must add). There’s also a large collection of inconsistencies and small bugs which, probably, only came to be due to architectural inefficiencies.
I wonder about all the features that might have been brainstormed for GTA Online at one point and were never actually pursued, due to the game not being really prepared for them, especially on the earlier, more limited console hardware. Tenuous rumors, originating from patents filed by Rockstar, point towards the possibility of the GTA VI map receiving significant expansions, and the virtual world generally being more prepared to change over time, possibly beyond what the engines of their previous games were really designed to handle.
My personal wish is that, in addition to new and refreshed locations, Rockstar has a vision for GTA VI updates that allows the introduction of new gameplay mechanics without them feeling tacked on. With GTA Online, we saw an interaction menu that kept increasing in complexity indefinitely, until it looked more like a debug tool than the primary way for players to interact with so many new features. There was also the problem, since essentially day one, that content and options were spread throughout three menus (the main one, the interaction one, and the phone), hurting discoverability. At one point, Rockstar began to remove older, unpopular content in an effort to make mission selection menus more friendly – or at least that was their justification. These problems with feature interaction and content discovery are the sort of thing I hope they will definitely fix in VI.
It would also be undoubtedly cool if such updates were made available to story mode, rather than keeping the single-player content relatively frozen in time, like what happened in GTA V. For a baffling example, Rockstar added more radio stations to Online over time, and even though the first few of these additions were available to story mode, the more recently added ones aren’t. However, realistically, I only see that type of evolution happening if the single-player and multi-player modes are more intertwined in GTA VI than they were in V, and that has other implications I am not too optimistic about.
Check expectations/speculations
Why, oh why, am I doing this? I didn’t want to write a post with my bucket list for GTA VI. I really wanted to focus more on a meta-commentary of all the speculation going around. There’s still a lot to discover, or at least confirm, about their next release. I have the impression that people tend to over-analyze the materials that have been put out – both the official and the leaked ones – and seem to forget about all the things these materials don’t show. Especially if we ignore the 2022 leaks, which are definitely very out of date by now, we know next to nothing about most of the aspects that make up an action-adventure open-world game, including:
The story: we barely know anything besides the names, basic descriptions and general motivations of some of the characters, but we have no idea whether that list of key characters is complete, nor how much of a “twist” there will be in said descriptions and motivations.
How the storytelling will take place, from a logistical and organizational standpoint: will there be clearly defined chapters? Will we be somewhat limited in what we can do in each chapter (like in RDR 2)? Will it be a mostly linear story – as per tradition – or will it actually have more player choices with impactful consequences? Will there be meaningful secondary, perhaps optional, story arcs?
How it will actually feel to play the game: will we feel enough freedom to ignore the story aspects and just “mess around” when we so wish? Will the game be as good of a sandbox as the predecessors? Will the missions continue to be mostly linear and relatively full of failure conditions as soon as you try to approach them in a creative way? If GTA VI is to have a more dramatic tone for its main story, will it still feel fine to cause a huge ruckus, including police shootouts? Will we continue to be able to save almost anywhere outside of missions, and spawn in the same place when reloading saves?
What story mode activities will exist outside of the main story: besides secondary missions, will the “random” world events continue to be mostly scripted mini-missions taking place in predefined locations? What mini-games and sports activities will be at our disposal? Will we still have prop collectathons like the ones that plagued GTA V and Online?
What Online will be like: a giant topic that we know nothing about. From what I’ve seen, most of the speculation centers around the acquisition of FiveM and one or two patents about methodology for session management (patents which, for all intents and purposes concerning any past or future inventions of mine or of my employers, I know nothing about). This acquisition has, sometimes, been used to argue that Rockstar will focus more on the RP style of gameplay, particularly for Online.
Regarding FiveM matters, I recommend that everyone who is interested in this topic read the long collection of information over at fivem.team. It’s an even longer write-up than this one, and some of it is speculative and likely biased or one-sided, but it shows receipts for many of its reveals, and it provides a unique glimpse into the somewhat secretive team that was cfx.re/FiveM prior to acquisition, and also into the small part of Rockstar involved in those matters. It is the reminder this essay is otherwise lacking, that not everything Rockstar touches is gold, in fact, it’s sometimes the contrary – and people, even inside Rockstar, have gotten hurt. If you, like me, thought that the elusive personality known as “NTA”/”NT Authority” was a bit… controversial, but didn’t know much beyond that, you are in for a treat that will change the way you perceive not just NTA, but also other personalities involved in different multiplayer game modding projects over the years.
Many of the more technical aspects beyond “graphics” and “attention to detail:” what will different weather conditions look like? What’s the vehicle destruction model like? How do NPCs react to our actions? Lots of speculation going around, very little official information.
I have personal preferences and wishes regarding many of these topics, but I don’t think they quite reach the point of being expectations. Particularly when it comes to the official marketing assets released so far, people tend to read between the lines, extrapolate a bit more, and then set expectations that may not be realized. Motivating these thoughts and discussions is pretty much the point of releasing such materials, but when it comes to a highly anticipated title like this one, the speculation reaches levels that, in my opinion, probably go a bit beyond what’s desired by the game publisher.
In this age of review bombing, “influencers,” instant communication, decent refund systems (in decent stores/jurisdictions), which make first perceptions matter more than any retrospective, no reputable publisher would want the first reaction from players to be one of disappointment, especially when the game in question is all everyone in this space will be talking about during its release month – making it so that any disappointing aspects would be endlessly parroted. With the massive anticipation for GTA VI and the budget behind it, a final product that clearly doesn’t match what is in the trailers, or which is otherwise troubled – like Cyberpunk 2077 was at launch – would not just cause enough damage to the GTA and Rockstar brands to trigger a sell-off of the TTWO shares; it could very well cause another video games industry crash.
I’ve been mostly dismissing the opinions of those who seem convinced that the GTA VI trailers are all a great con; that Rockstar is attempting to replicate Ubisoft’s “success” when the latter showed trailers for the original Watch Dogs which greatly misrepresented different aspects of what the final product would be like on contemporary hardware, particularly when it came to graphics. I can’t come up with a good reason why Rockstar would consciously opt for this strategy: any benefits of doing so (like increased day one sales) look like they’d be completely undone by the aforementioned reputational hit (which could have impacts over the intended multi-year lifespan of the game, particularly the multiplayer portion). But then again, with the amount of games that seem to release nowadays in a poor state of quality control and performance, I definitely see where some of the worries come from.
Rockstar has historically produced final products that exceed the earlier marketing previews in most aspects, products which tend to be extremely competent in both the technical and artistic departments. However, the great debacle of the “Trilogy Remaster” – about which I could easily write an entire essay of its own – has understandably left a sour taste in many people’s mouths, including mine. I am absolutely convinced that the technical story for GTA VI will have nothing to do with that, and will be in line with Rockstar’s usual form, the one that squeezed GTA V into the PS3 without making the type of haphazard compromises that Ubisoft had to make to squeeze the first Watch Dogs into the same console. Briefly, the reasons: there are orders of magnitude more technical effort behind GTA VI than behind the infamous remasters; the remasters were not a flagship product the way an original new game is (they always felt to me like more of a cash grab attempt); the remasters were handled by a quasi-external studio, unlike VI which is the current priority of essentially all of Rockstar’s studios working in tandem; the remasters had to preserve much of the behavior of two-decade-old code and integrate it into a modern engine, while VI will be no such Frankenstein. And finally: I highly doubt they’d run the risk of fumbling two major releases in a row.
So I do indeed have some baseline expectations for what GTA VI will be like – as the title of this essay indicates, I expect it to be a masterpiece. I just try not to have too many expectations about the specifics of what that will entail. To not have expectations is even better, and easier, than to keep them all in check – a wild thing for someone who’s mainly writing speculation to say, I know. Trying to have this sense of detachment makes the wait for the final product definitely more boring, and can easily come across as pessimism. At the same time, I don’t want to completely ignore everything leading up to the moment I play the game, especially when that would require that I stop following the news and discussions about topics that interest me a lot, for years!
My initial contact with the first GTA I played by myself, GTA V, took place well after its launch. Prior to playing it, I hadn’t consumed any of its promotional materials, read any of the anticipatory discussions, not even any post-release reviews. This was because I only really started playing such major game releases when I finally put together a sufficiently powerful PC in the latter half of the last decade, and prior to that, I really didn’t care that much about games. I was obviously aware that GTA V was a major best-selling release – hence why it was one of the first games I played on that PC. Still, I had no expectations besides the general idea of what GTA gameplay vaguely looked like, mostly from seeing colleagues play very butchered versions of GTA: SA on school computers, nearly a decade earlier.
In retrospect, and having now looked at its promotional materials and knowing more about all the pre-launch anticipation and speculation, I think that being able to dive into GTA V from a point of nearly-zero knowledge was a better experience than if I had eagerly awaited the game for years. I can’t quite explain why I feel this, but I wonder if this is why, besides pure nostalgia, many people prefer the first GTA they played? Or even the first open-world action-adventure game in the same genre as GTA? Because they went into it with fewer expectations and “dreams,” not even those originating from playing a different game in the same genre, and therefore had more jaw-dropping, mind-opening or just plain fun moments than they would otherwise?
I suppose I am trying to balance the way I experienced GTA V, and later other games by Rockstar, with the natural excitement stemming from wanting more of what I liked about these games, and with my interest in the extremely long development cycle of VI. I know I will be diving into it with more expectations and more of a wishlist than I ever had for any game in its genre, but I also know I will be nearly a decade older than I was during my first V playthrough… I have different opinions now, and much more experience with the games medium, so the circumstances would be different anyway. And following the trailers, and a bit of the speculation, does indeed make the wait less boring, even if it may ultimately make the actual final product more disappointing.
Shielding ourselves from marketing materials, fan theories, and general news about an upcoming major release is difficult once we have an interest in that particular type of game, and we are bound to create expectations, so the next best option is to keep them in check, preparing ourselves for small disappointments in certain aspects. I think that some people are unable to do this in their thoughts alone, and as a way to cool themselves down, overcompensate in the “pessimist” direction and end up claiming that the trailers are “faked” in some way, for instance. Clearly, I too was unable to quietly calm my expectations, as I couldn’t forego writing this essay, indulging in speculation as I went. I’m convinced this is, above all else, a coping mechanism.
The largest ever (s)cope
Even while impatient about the release of a sequel to a favorite piece of media, dreaming about what we would(n’t) like it to be helps us understand and refine our personal taste, guiding our exploration of the medium. This is also true for games, and for the GTA franchise in particular, it feels necessary. With Rockstar taking more and more time between game releases, even though such releases have enormous amounts of content to explore, people who enjoy open-world action-adventure games eventually feel compelled to explore other games, from other developers.
Exposure to the original ideas present in other games can reflect back on the way we perceive our favorites. It adds more points to the fuzzy cloud of things (mechanics, plot points, even more minute things like soundtrack ideas!) that we consider for inclusion in the imagined sequels to our favorites. It can also expose some flaws and clichés of our favorites that we would otherwise not notice – but that only makes us yearn more for a fresh sequel that will hopefully improve on those aspects!
Besides being harmless pastimes, speculating and developing theories about secretive upcoming titles can be an avenue for us to rationalize our wishes for them, to try to make our wishes fit in the reality of what is progressively divulged about them. I imagine these activities are most often associated with happy emotions, but they can also be ways to deal with a subconscious fear that the end product may not completely be to our liking.
I think it’s natural to wish for a sequel that matches our preferences even better than the original did, such that the sequel becomes our new favorite, or at least, one of our favorites. And when it comes to Rockstar games, if GTA VI in particular isn’t sufficiently to my liking, then I already know I’m probably not seeing a new GTA in a decade, at least… and with the way the genre is going, it’s unlikely any other developer will create a game with an equally impressive scope (CDPR with Cyberpunk 2, maybe?). So I definitely worry that GTA VI will miss the mark from the point of view of my personal, subjective, specific taste. Actually, I often paint the worst pictures in my mind (PC release not coming before 2030! No mods! Mandatory anticheat even for story mode! Shark cards in story mode!), but that’s definitely more of a “me” thing.
As someone who is quite interested in how these large projects come to be, my GTA VI daydreaming is often not so much about what the end product will be like, but about the processes that led to it becoming what it will be. This is definitely not an uncommon thing – there are plenty of communities centered around datamining specific games, searching for evidence of cut content, beta builds, early concepts, etc., all in hopes of understanding how these products become the way they do, perhaps understanding why some of the things we most awaited in a game didn’t end up making the cut. Post-release, we seek this understanding, not to rationalize our anxiety about what an upcoming game will be like, but to perhaps calm our subconscious disappointment over why it didn’t completely turn out the way we envisioned.
In this sense, I look forward to the small disappointments of GTA VI as much as I look forward to its jaw-dropping moments, and I’m definitely eager to learn more about its development history. We still don’t have a super clear picture of GTA V’s production process, after over a decade and multiple significant leaks, including that of the full source code. It is therefore probable that VI’s history will remain elusive for just as long. I’m sure that, while still in development, GTA VI already has a more interesting development timeline than most games made before: bigger budget, bigger wait, bigger scope, bigger drama – and more coping from fans than ever before, as we wait for, hopefully, the 26th of May 2026. Still, Rockstar Games could have it worse: they could have the mission of making a Minecraft sequel.
Important: This post was initially drafted in September 2023. I completely forgot to finish it by proofreading it and giving it a better conclusion. Rather than letting the work go to waste, I decided to post it now with minimal editing. The specific situation that prompted its creation has long passed, but I think the general thoughts and concerns are still applicable. While it is not relevant to the arguments being made, I’ll mention that since then, in mid-2024, I did finally buy and play through Phantom Liberty, which I thoroughly enjoyed.
The recent 2.0 update to Cyberpunk 2077, and near-simultaneous paid DLC release, made me reflect on the redemption arc that game has gone through. Just like record sellers like GTA V and Minecraft, it can’t possibly ever please all audiences, but I consider Cyberpunk to finally be in a state where it mostly warrants the massive marketing and hype it received leading to its release, which happened in December of 2020 – nearly three years ago. While I think the game had already become worth its asking price quite some time ago – the developers, CDPR, issued multiple large patches over the years – this recent 2.0 update introduces so many changes, from small details to large overhauls of the gameplay aspects, that it makes the 1.x series look like beta versions of a game that was two thirds into its development cycle.
In a world where so many software publishers have moved towards continuous releases with no user visible version numbers, or versioning that is little more than a build number, and considering that live service games are one of the big themes in the industry, it’s almost shocking to see a triple-A game use semantic versioning properly. And bringing semver to the table isn’t just a whim of a software developer failing to come up with good analogies: in semver, you increase the major version when you introduce incompatibilities, and 2.0 introduces quite a few of them. 2025 editor’s note: CDPR has started/resumed using “creative” minor patch numbers which I am not 100% sure strictly follow semver.
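To make that semver rule concrete, here is a minimal, illustrative sketch; the version strings and function names are made up for the example and are not actual CP2077 patch numbers:

```python
# Illustrative sketch only: a minimal semantic-version comparison showing
# the semver rule referenced above: only a MAJOR bump (e.g. 1.x -> 2.0)
# is allowed to introduce incompatibilities.
from typing import Tuple


def parse_semver(version: str) -> Tuple[int, int, int]:
    """Split a 'MAJOR.MINOR.PATCH' string into a comparable integer tuple."""
    major, minor, patch = (int(part) for part in version.split("."))
    return (major, minor, patch)


def is_breaking_upgrade(old: str, new: str) -> bool:
    """Per semver, an upgrade may break compatibility only if MAJOR increased."""
    return parse_semver(new)[0] > parse_semver(old)[0]


print(is_breaking_upgrade("1.6.3", "2.0.0"))  # True: major bump, expect incompatibilities
print(is_breaking_upgrade("2.0.0", "2.1.0"))  # False: minor bump, backwards compatible
```

Tuple comparison also gives version ordering for free (`parse_semver("2.1.0") > parse_semver("2.0.3")`), which is exactly the property that build-number-only versioning schemes discard.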
In technical terms, CDPR dropped compatibility for an older generation of consoles, and raised the minimum requirements on PC (although the game generally appears to run smoother than before, at least on hardware that was already above the recommended spec). But the possible incompatibilities extend to the flesh sitting in front of the screens: there have been major changes to character progression, balancing of different combat options, introduction of new gameplay features like vehicle-oriented combat and, although I haven’t been able to confirm this, perhaps even small changes to the consequences/presentation of some story choices. So players that were already comfortable with, and accustomed to, the way 1.x played may be unhappy, or at least temporarily uncomfortable, with the changes made for 2.0. Personally, I am still on the fence about some changes, but to be honest, I haven’t explored this new version for more than three or so hours yet. 2025 editor’s note, hundreds of hours later: I cannot presently name any change I am unhappy with.
There is no doubt that 2.0 is a major overhaul of CP2077, and I am convinced that, overall, the changes were for the best: the game is clearly a better and more complete product now. This doesn’t mean there aren’t still more improvements that could be made. Furthermore, this game’s past cannot be changed, meaning that all the material needed for pages-long essays and hours-long “video essays” will forever exist. After all, there is a strong argument that the game took almost three years to reach the minimum level it should have been at on release, considering the massive marketing campaigns and all the “half promises” made along the way. However, I strongly believe the game ended up becoming a better product than if its launch had been smooth; the space for the development of a 2.0 version, one that can afford to introduce this many changes to the gameplay and still be received positively, likely would not have been there if Cyberpunk had been just yet another “mildly disappointing considering the hype, but otherwise decent and competent” major release.
The main notion I wanted to explore in this essay is this perverse situation, where there is seemingly an incentive to release unfinished software as if it were finished, disappoint consumers, still end up profiting majorly and even, sometimes, with a greatly improved product to sell and plenty of praise to receive, in ways that would be difficult to achieve otherwise. “Entitled gamers” (often paying consumers) may be the noisiest about it, and gaming likely has the most prominent and recognized “redemption arcs,” but this situation permeates all sorts of software development, including areas that were once seen as “highly sensitive” or “mission critical”, such as certain embedded systems, communication systems, and finance software.
Note: it is probably a good time to remind readers that these opinions are my own and do not represent the views of my employer, nor my opinions on my employer’s views and strategy.
Software engineering can move fast and break things: the internet gives you the luxury of fixing things later. This is a luxury that not many other engineering disciplines can afford, but it is also a fallacy: you can’t issue a patch to your reputation nor to societal impact as easily as you can deploy a new version of your code.
Internet pervasiveness has been changing the risk management and product iteration paradigms around software development over the last two decades or so. In some embedded systems, the paradigm shift was named the “internet of things”, although embedded systems that can be upgraded remotely had been a thing for decades before the term was popularized – the real change is that there are many more of these now, in categories where they didn’t really exist before, such as home appliances. Connecting more and more things to the internet seemingly becomes more acceptable the easier they are to connect to a network, and many advancements were made on the hardware front to enable this.
In gaming, there is the well-known concept of “early access” to clearly label products which are under development, liable to undergo significant changes, but already available to consumers. Some developers use this to great effect, collecting feedback and telemetry from large numbers of players to ultimately end up with a better product. Outside of gaming, technically minded consumers have long been familiar with the term “beta testing.” Beta/early access software may be available for purchase (although, admittedly, sometimes discounted) or be provided at no cost to existing users. In any case, consumers enrolling in these programs are aware of what they’re getting into.
Over the last decade or so, I feel that users have been gradually exposed to software and services whose initial versions are less and less complete. Some of this is to be expected and encouraged, such as the aforementioned beta and early access programs that have the potential to improve the final product. But clearly the feedback from beta testing programs didn’t feel sufficient to many developers, who started including more and more telemetry in hopes of collecting more feedback, without users having to manually provide it or specifically opt into any study.
I believe the really objectionable situations are those where the barrenness or incompleteness is initially obscured and then, if users are lucky, iterated upon, at a pace which is often unpredictable. It makes product reviews and testimonials become outdated, and thus mostly useless, quickly. This development model is convenient for the developers, as it theoretically represents the least risk: ideas, features and business models get the chance to be evaluated sooner, at a time when it is easier to pivot. It becomes possible to spread the development of complex features throughout a longer period of time, while collecting revenue and capturing market share earlier in the process.
Unfortunately, from my point of view, there isn’t much in this go-to-market approach that is beneficial for the clients/users/consumers. Particularly for products that are directly paid for (as one-time purchases or as subscriptions), I’ve often felt that one is paying to become an unpaid beta tester and/or an unwilling subject of a focus group study. The notion of simply opting for an alternative is not applicable when the product is quite unique, or when every worthy alternative is doing the same.
Then there is the update fatigue factor. After such an “early launch,” ideally, the inferior initial product is then quickly updated. But this rarely consists of just a couple updates that make lots of changes in one go. Most likely, the audience will gradually receive multiple updates over time, frequently changing the design, feature set and workflow of the product, requiring constant adaptation. Adding to this annoying situation, these updates may then be rolled out in stages, or as part of A/B testing – leading to confusion regarding the features of the product and how to use them, with different users having varied experiences which are not in agreement, which can almost be seen as gaslighting.
It is difficult to harshly criticize product developers that improve their products post launch, be it by fixing mistakes, adding features or improving performance and fitness for particular purposes. I don’t think I would be able to find anyone who genuinely believes that Cyberpunk shouldn’t have been updated past the initial release, and I am certainly not that person either. It’ll be even harder to find someone who can argue that Gmail should have stayed the exact same as its 2004 release… wait, spam filters aside, maybe that won’t be hard at all. You can easily find people longing for ancient Windows versions, too, etc.
Coming back to Cyberpunk, I think most people who played its initial versions (even those that had a great time) will agree that it should have been released and initially advertised as an “early access” title. For many players, the experience was underwhelming to the point of being below the expectations set even by many prior indie early access titles. Those who had a great time back then (mostly those playing on PC) will probably also agree that, given all the features added since the 1.0 release, that version might as well have been considered an “early access” one too. Hence my argument that the problem is not necessarily with the strategy to launch early and update often, but really with the intention of doing so without properly communicating that expectation.
One must wonder if Cyberpunk would have been so critically acclaimed and reached such a complete feature set, years later, if it had released in a more presentable state. I can imagine an alternate universe where the 1.0 version of the game releases with no more than the generally considered acceptable number of bugs and issues, which get fixed in the next two or three patches over the course of a couple months. The game receives the deserved critical acclaim (that it received anyway – that was controversial too, but I digress) and, because it releases in a good state, CDPR never feels pressured into making major changes to add back cut features or to somehow “make up for mistakes.” The end result would be a game where there are maybe one or two DLCs available for purchase, but owners of the base version don’t really see many changes beyond what was initially published – in other words, the usual lifecycle of non-controversial games.
It is possible that there is now a bit of a perverse incentive to release eagerly awaited games – and possibly other products – in a somewhat broken and very incomplete state that excels only in particular metrics – in the case of Cyberpunk, those metrics would be the story and worldbuilding. This, only so that they can then remain in the news cycle for years to come, as bugs are fixed and features are added, eventually receiving additional critical acclaim as they join the ranks of games with impressive redemption arcs, like No Man’s Sky and Cyberpunk did. To be clear, I think it would be suicidal to do this on purpose, but the truth is that generating controversy and then “drip-feeding” features might become more common outside of live service games.
A constant influx of changes to a product can frustrate consumers and make it difficult to identify the best option available in the market. Cynics might even say that that’s a large part of the goal: to confuse consumers to the point where they’ll buy based purely on brand loyalty and marketing blurbs; to introduce insidious behavior in software by shipping it gradually across hundreds of continuously delivered updates, making it impossible to distinguish and select between “known good” and “known bad” releases.
I must recognize that this ability to update almost anything at any time is what has generally made software more secure, or at least as secure as it’s ever been, despite threats becoming more and more sophisticated. For networked products, I will easily choose one that can and does receive updates over one that can’t, and I am even more likely to choose one I can update myself without vendor cooperation.
Security has been a great promoter of, and/or excuse for, constant updates and the discontinuation of old software versions and hardware models. The industry has decided, probably rightly, that at least for consumer products, decoupling security updates from feature changes was unfeasible, and it has also decided, quite wrongly in my view, that it was too unsafe to give users the ability to load “unauthorized” software and firmware. This is another decision that makes life easier for the developers, and one that has no upsides for users that I can think of. In some cases, the lack of security updates has even been pushed as a way to sell feature updates. For that upselling strategy to work, it’s important that users can’t develop and load the updates/fixes themselves.
I am sure that people in marketing and sales departments worldwide will argue that forcibly pushing feature updates onto users is positive, using happy-sounding arguments like “otherwise they wouldn’t know what they’re missing out on.” I am sure I’ve seen this argument made in earnest, perhaps more subtly. But it should be obvious to everyone outside of corporate, and more specifically outside of these departments, that in practice this approach just reduces user choice and is less respectful of users than the alternative. Funnily enough, despite being used as a punching bag throughout this essay, Cyberpunk is one of the products that, despite a notable amount of feature updates, also respects user freedom the most, as the game is sold DRM-free and CDPR makes all earlier versions of the game available for download through GOG. And this is a product whose purpose is none other than entertainment – now if only we had the same luxury regarding things which are more essential to daily life.
The truth is that – perhaps because of the lack of alternatives in many areas, or simply because of a lack of awareness and education – the public has either decided to accept the mandatory and often frequent feature updates, or decided that they have no option but to accept them. This is where I could go on a tangent about how, despite inching closer and closer every year (in recent years, largely thanks to Valve – thanks Valve!), the year of the Linux desktop probably won’t ever come – but I won’t, that will be a rant for another time; you see, it’ll be easier to write when desktops aren’t a thing anymore.
Until this “release prematurely, and force updates” strategy starts impacting profits – and we’re probably looking at decade-long cycles – it won’t be going away. And neither will this increasingly frequent feeling that one is paying to be a beta tester – be it directly with money, through spending our limited attention, or by sharing personal and business data. The concept of the “patient gamer,” who waits months or even multiple years after games release to buy them at their most complete (and often cheapest) point, might just expand to an increasing number of “patient consumers” – often, much to the detriment of their IT security.
I did not want to close the year without adding something to this website, and to keep with the theme of the blog post series I might never finish, here is another post about Watch Dogs… but this one is more of an audiovisual experience.
That’s right: for comedic purposes, I used Watch Dogs to make a high-effort recreation of the first trailer of GTA VI. Despite still being just a trailer, a bunch of rumors, and a vaster-than-usual collection of leaks, that trailer may have hyped and warmed some people’s hearts more than whatever “game of the year” had – and we are talking about a year in which a bunch of very good games released. You can tell that piece of media from Rockstar Games tickled something in me too, or I wouldn’t have spent probably over fifty hours carefully recreating it using a game from a different series.
Soon after the trailer dropped, I got the feeling that I wanted to parody it in some way. The avalanche of trailer reaction content that came immediately after its release – including from respectable channels like Digital Foundry, who probably spent as much time analyzing the trailer from a technical perspective as they spend looking into some actually released games – had me entertain the idea of making a “Which GTA VI trailer analysis is right for you?” sort of meta-analysis meme video. But I realized that making it properly would require actually watching a large portion of that reaction content, and I was definitely not feeling like it. It would also require making a lot of quips about channels and content creators I am not familiar with. Overall, I don’t think it would have been a good use of anyone’s time: I wouldn’t have had as much fun making it, and it wouldn’t be that fun to watch.
The idea of recreating the trailer in other games is hardly original: after all, I have heard about at least two recreations of the trailer in GTA V, there’s at least one in GTA San Andreas as well, and I hope some have been made in Vice City, because it just makes sense. In the same vein as mine, there are also recreations in different game series, including in Red Dead Redemption and in Saints Row. As far as I know, mine is the first one made in Watch Dogs.
I did not intentionally mean anything with the use of a game whose reception was controversial because of trailers/vertical slice demos that hyped people up for something that, according to many, was not really delivered in the final game (hence the nod to E3 2013 at the start – RIP E3, by the way). Nor is the idea here to say that Watch Dogs, a 2014 game, looks as good as what’s pictured in the trailer for GTA VI, a game set to release eleven years later. Largely, I chose this game because, for one, I like Watch Dogs even if I am not the most die-hard fan you’ll find; because it is the only game other than GTA V where I have some modding experience; and because nobody had done it using Watch Dogs.
This was everything but easy to pull off: the game doesn’t even have a conventional photo mode, let alone anything like the Rockstar Editor or Director Mode in GTA V. There aren’t many mods for the games in the Watch Dogs series, especially not the two most recent ones, and the majority of these mods aren’t focused on helping people make machinima. One big exception is the camera tools that I am using, and even that was primarily built for taking screenshots – keep in mind I had to ask the author for a yet-to-be-released version that supported automatic camera interpolation between two points.
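For illustration, camera interpolation of the kind that tool provides can be sketched as a linear blend between two keyed camera positions; this is my own hypothetical simplification, not the mod’s actual code, and a real tool would also interpolate orientation (ideally via slerp) and apply easing to the timing:

```python
# Hypothetical sketch of interpolating a camera between two keyed points.
# Not the actual mod's internals; positions only, no orientation or easing.
from dataclasses import dataclass


@dataclass
class CameraPose:
    x: float
    y: float
    z: float


def lerp(a: float, b: float, t: float) -> float:
    """Linear interpolation: t=0 gives a, t=1 gives b."""
    return a + (b - a) * t


def interpolate(start: CameraPose, end: CameraPose, t: float) -> CameraPose:
    """Blend two camera poses for a normalized time t in [0, 1]."""
    return CameraPose(
        lerp(start.x, end.x, t),
        lerp(start.y, end.y, t),
        lerp(start.z, end.z, t),
    )


# Halfway between two poses:
print(interpolate(CameraPose(0.0, 0.0, 0.0), CameraPose(10.0, 20.0, 30.0), 0.5))
# CameraPose(x=5.0, y=10.0, z=15.0)
```

Driving `t` from a frame counter divided by the shot’s total frame count is what turns this into the “automatic camera interpolation between two points” described above.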
I started by recreating just a few shots from the beginning of the trailer. I liked those brief seconds of video so much, and they sparked enough interest in the modding community, that I slowly went through recreating the rest. This required bringing more mods into the equation – including a WIP in-game world editor that was released with light protection measures (probably to avoid people bringing it into online modes or adding it into shitty mod merge packs?) which I had to strip, so I could make it play along with the rest of the tools I was using, including some bespoke ones.
Lots of Lua code was injected into the game in the making of this video, and as I said, this is more for comedy and that sense of pride and accomplishment than any sort of game/mod showcase… but I’m happy to report that, besides some minor retiming, color grading, and artificial camera shake and pan effects, all shots were achieved in-engine – except for the two shots involving multiple bikes on screen, which required more trickery.
Then there was the careful recreation of every 2D element in Rockstar’s video, including avatars, icons, and text placement, and an hours-long search for fonts whose results I am still not 100% happy with. One of the fonts Rockstar used is definitely Arial, but with a custom lowercase Y… I no longer have those notes, but at one point I could even tell you which font foundry was most likely to have supplied the one in question. And did I mention how I also recreated the song cut Rockstar used, so I wouldn’t have to rely on AI separation with all its artifacts?
I think it was while working on the “mud club” shot that I realized I just wouldn’t be able to recreate everything as precisely as I would like. One idea that crossed my mind was to use the infamous spider tank in place of the monster truck in that shot, but I just couldn’t find an easy way to have the spider tank there with the proper look, while still being able to control my mods. Sure, there were multiple technical solutions for it, but that would have meant spending days/weeks just on those two or so seconds of video. I also wouldn’t have been able to find matching animations for the characters. So I decided to take some shots in a different direction that alludes to the setting of Watch Dogs.
Eventually, I let that creative freedom permeate other points of the video. For example, the original “High Rollerz Lifestyle” shot would have been somewhat easy to recreate (the animations for the main character in it notwithstanding) but I felt I had already proven I could recreate easy shots, so I decided to have some fun with it and instead we ended up with “High Hackerz.” Similarly, the final shot features three protagonists instead of two, because I couldn’t decide which one was the most relevant “second character” in the world of Watch Dogs.
The end result seems to have been received to great acclaim, judging by all the public and private praise I’ve been receiving. There are people asking me to continue making this sort of thing, too, which I am not sure is something I want to pursue, especially not on a regular basis – I think a large portion of the fun I had making this was precisely because it had a sufficiently closed scope and was sufficiently distinct from what I usually do, and I suspect I would have a worse time making more open-scoped machinima, particularly in this game where the tooling is only “limited but functional.”
There are also people asking for this sort of thing done in Watch Dogs 2 rather than in the first game – but there are even fewer mods for that game, and I have even less knowledge of its internals. Judging by the title of Rockstar’s trailer, it’s likely there will be at least a second trailer, so maybe I can combine the wishes of both sets of people by then. It’s probably not something I’ll feel the drive to do, though – it will also depend on how busy I am with life by the time that second trailer releases.
As I was taking care of the last shots and editing tweaks, I was definitely feeling a bit tired of this project, and subconsciously I probably started taking some shortcuts. Looking back on the published result, there are definitely aspects I wish I had spent some more time on. There is an entire monologue section missing from the trailer which I can pass off as an artistic decision, but the truth is that I only realized I hadn’t recreated/found a replacement for it after the video was up on YouTube. Similarly, for the effort this took, I wish I had captured the game at a resolution higher than 1080p (my monitor’s vertical resolution), because after going through editing (having to apply cropping, zooming, etc.) the quality of the video really suffers in some aspects. But the relevancy of this meme was dropping by the day, and if I had spent much more time on it, not only would I have been sick and tired of the entire thing, the internet would also have moved on. It is what it is, and once again similarities are found between art and engineering: compromises had to be made.
One thing is for sure, the next video I publish on my YouTube channel is unlikely to live up to these newfound expectations, and I like to think that I have learned enough to deal with that. Meanwhile, and on the opposite note, I hope that 2024 lives up to all of your expectations. Have a great new year!
Before I start, a word about this website. It has mostly sat abandoned, as having a full-time software development job doesn’t leave me with the comparatively endless amounts of free time and mental bandwidth I once had. What remains, in terms of screen time, is usually spent working on other projects or doing things unrelated to software development that require lower amounts (or different kinds) of mental activity, like playing ViDYAgAMeS, arguing with people on Discord, or mindlessly scrolling through a fine selection of subreddits and Hacker News. While I quite enjoy writing, it’s frequently hard to find something to write about, and while I have written more technical posts in the past – this one about MySQL being the most recent example – these often feel too close to my day job. So, for something completely different, here’s some venting about a video game series – this was in the works for over a year, and is my longest post yet. Maybe this time I’ll actually manage to start and finish a series of blog posts.
Watch Dogs is an action-adventure video game series developed and published by Ubisoft and it is not their attempt at an Animal Crossing competitor, unlike the name might suggest. The action takes place in open worlds that are renditions of real-life regions; at the time of writing, there are three games in the series: Watch Dogs (WD1), released in 2014 and set in a near-future reimagination of Chicago; Watch Dogs 2 (WD2), a 2016 game set in a similar “present-day, but next year” rendition of the San Francisco Bay Area, and Watch Dogs: Legion (WDL), a 2020 game set in a… uh… Brexit-but-it’s-become-even-worse version of London. The main shtick of these games, in comparison with others in the same genre, is their heavy focus on “hacking,” or perhaps put more adequately, “an oversimplification, for gameplay and storytelling purposes, of the new delicate information security and societal challenges present in our internet-connected world.”
WD1’s launch menu background is a long video that emulates glitch art (also known as datamoshing) and features key story characters and locations.
The games fall squarely into two categories: “yet another Ubisoft open world game” and what some people call “GTA Clones.” It’s hard to argue against either categorization, but the second one, in itself, has some problems. The three Watch Dogs games came out after the initial release of the latest entry in the Grand Theft Auto series (GTA V in 2013), and GTA VI is yet to be officially announced, so snarky people like me could even say that, if anything, Watch Dogs is a continuation, not a clone, of GTA!
More seriously, there are people on the internet who will happily spend some time telling you how “GTA clone” is a terrible designation that is actually hurting open world games in general, by discouraging developers from making more open world games with a modern setting – and I generally agree with them. But I prefer to attack this “GTA clone” designation in a different way, the childish one, where you point the finger back at the accuser and yell “you too!”: GTA Online has, in multiple of its updates, also “cloned” some of the gameplay elements most recently seen in Watch Dogs, and GTA in general has also taken inspiration from different open world games that were released over the years.
“Player Scanner”, a GTA Online novelty. Image source (because I’m too lazy to find a GTA Online session with cooperating players)
In a 2018 update, Rockstar brought a “Player Scanner” to GTA Online, which is reminiscent of the “Profiler” in Watch Dogs games, and in the same update, they also introduced weaponized drones that can be compared to the drone in WD2. More recently, GTA Online received a new radio station whose tracks are obtained from collectibles spread around the world – similar to how the media player track list can be expanded in WD1. I doubt that Watch Dogs was the primary motivation or inspiration for these mechanics, and they were hardly exclusive to Watch Dogs, but the point is that the “cloning” argument can go both ways.
Nowadays, when it comes to open world games, there’s hardly anyone “cloning” a particular game series. Watch Dogs games are GTA competitors, but the same can be said about countless other games, including many that don’t even make use of open world mechanics. None of this negates the fact that, despite not being a “GTA clone,” Watch Dogs ticks all the boxes of said unfortunately named category, for which a better name would totally be “open world games set in a place recognizable as the world we presently live in.” And therefore I won’t hide the fact that many of the comparisons I’ll make will be directly against the two “HD Universe” GTA titles, IV and V, as these are definitely the most well-known and successful games in said category.
I have played through all three games in the Watch Dogs series, on PC. I’m certain I spent more time than the average player in the first two games, having played through both twice, going for the completionist approach the first time I played both of them, and having spent more time than I’d like to admit in the multiplayer modes of WD1 and WD2. By “completionist approach,” I mean getting the progression meter to 100% in the first game, and going for all the collectibles spread around the map in WD2, in addition to completing all the missions. Why? Because, in general, I found their gameplay and virtual worlds enjoyable, regardless of their story or general “theme.”
While players and Ubisoft marketing tend to overly focus on the “hacking” aspect of the series, in my opinion its most distinctive aspect, compared to other open world games, is the fact that more than being a shooter, these can be open world puzzle games, requiring some thought when approaching missions, especially when opting for a stealthier approach. Mainly in the most recent two games, and to some degree in the first one too, there are typically multiple approaches to completing missions, catering to wildly different play styles. This extends even to their multiplayer aspects and adds to the replayability of the games. For example, I went for a mainly “guns blazing” approach on my first WD2 playthrough and settled with a “pacifist” approach when I revisited WD2 for a second time – which, in my opinion, is the superior way to get through the game’s story. But let’s not get ahead of ourselves.
Initially, I was going to write a single post with my thoughts about the three games. As I was writing some notes on what I wanted to say, I realized that a single post would be insufficient – even the individual posts per game are going to be exhaustively long. So I decided to write separate posts, in the order the games have been released, which is also the order I have played them. This post will be about the first Watch Dogs, and the next one will be about its sole major DLC, called Bad Blood.
My notes file for the whole series has over 200 bullet points, so hold on to your seats. Before we continue onto WD1, I just want to mention one more thing: I’m going to assume you have some passing familiarity with the three games, even if you have not played them yourself. I won’t be doing much more series exposition; I mostly want to vent about it, not write a recap. Still, I’ll try to give a bit of a presentation on each thing I’ll talk about, so that those who have played the games before can have a bit of a recap, and so that those who haven’t – but for some reason are still reading this – aren’t left completely in the dark.
Onto what is probably the lengthiest ever rant/analysis/retrospective of WD1. Enjoy!
“An appeal to celebrity is a fallacy that occurs when a source is claimed to be authoritative because of their popularity” [RationalWiki]
Today I was greeted by this Discord ping:
What I want to talk about is only very tangentially related to what you see above, and the result of some shower thoughts I had after reading that. I did not watch the video, and I do not intend to, just like I haven’t watched most of DarkViperAU’s “speedrunner rambles” or most of his other opinion/reaction videos about a multitude of subjects and personalities. My following of these YouTube drama episodes hasn’t gone much beyond reading the titles of DVAU’s videos as they come up on my YouTube subscriptions feed. What I want to talk about is precisely why I don’t watch those videos and why I think that many talented “internet celebrities” or “content creators” would be better off not making them, and/or why the fans who admire them for their work alone would be better off ignoring that type of content.
…
OK, I was planning on writing a much longer post but I realized that my arguments would end up being read as “reaction videos and YouTube drama are bad and you’re a bad person if you like them”, which is really not the argument that I want to make here. Instead, let me cut straight to the chase:
Just because you admire someone’s work very much,
that doesn’t mean that you must admire its creators just as much,
nor that you should agree with everything they say
(nor that everything they say and do is right),
and the high quality of some of their work does not necessarily make them quality people nor makes all of their work high-quality.
This is one of those things that is really obvious in hindsight. Yet I often find it hard to detach works from their creator, and I believe this is the case for a majority of people, otherwise the “appeal to celebrity” fallacy would not be so common, and there wouldn’t be so many people interested in knowing what different celebrities have to say in areas that have nothing to do with what made them popular and successful in the first place.
This is not a “reaction/opinion pieces are bad” argument. If someone’s most successful endeavor is precisely to be an opinion maker, then I don’t see why they shouldn’t be cherished for that, and their work celebrated for its quality. But should you not like their work, you’re still allowed to like them as a person, and vice-versa.
DarkViperAU is an example of a “newfound internet celebrity” I admire for much of their work but who is progressively also veering off to a different type of content/work (of the “opinion making” type) which, if I were to pay attention to it, could greatly reduce my enjoyment of the parts of his content that I find great. For me, the subject of today’s ping on his Discord was a great reminder of that, and sent me off on a bit of a shower-thought journey.
While I am not fond of end-of-year retrospectives – calendar conventions do not necessarily align with personal milestones – 2020 was definitely the most awkward year in recent times for a majority of the world population. It was an especially awkward year for me, as among many other things, it was when I fell into what I’d describe as an “appeal to a celebrity’s work” fallacy. I initially believed I’d really like to work with people who make a project I admire very much, but over the months I found some of their methods and personalities to really conflict with my personal beliefs, and yet, I kept giving my involvement second chances, because I really felt like the project could use my contribution.
In the end, there’s no problem in liking an art piece exclusively because of its external appearance, even if you are not a fan of the materials nor of some of its authors. And if you think you can improve on that piece of art, expect some resistance from the authors, keeping in mind it might fall apart as you attempt to work on it. Sometimes making your own thing from scratch is really the better option: you might be called an imitator and the end result may even fall short of your own expectations, but you’ll rest easy knowing that you have no one but yourself to blame.
On a more forward-looking note, I wish you all the best for the years to come after 2020. I have a new Discord server which, unlike the UnderLX one, is English-speaking and not tied to any specific project or subject. My hope is to get in there those who I generally like to talk to and work with, so we can all have a great time – you know, the typical thing for a generalist Discord server. I know this is an ambitious goal for just yet another one of these servers, but that won’t stop me from trying. My dear readers are all invited to join Light After Dinner.
Many people have been experiencing strange time perception phenomena throughout 2020, but certain database management systems have been into time shenanigans for way longer. This came to my attention when a friend received the following exception in one of his projects (his popular Discord bot, Accord), coming from the MySQL connector being used with EF Core:
MySqlException: Incorrect TIME value: '960:00:00.000000'
Not being too experienced with MySQL, as I prefer PostgreSQL for reasons that will soon become self-evident, for a brief moment I assumed the problem with this value was the hundreds of hours, as one could reasonably assume that maybe TIME values were capped at 24 hours, or that a different syntax was needed for values spanning multiple days, and that one would need to use, say, “40:00:00:00” to represent 40 days. But reality turned out to be more complex and harder to explain.
Checking the documentation was the most natural next step, and the MySQL documentation goes:
MySQL retrieves and displays TIME values in 'hh:mm:ss' format (or 'hhh:mm:ss' format for large hours values).
So far so good, our problematic TIME value respects this format, but the fact that hh and hhh are explicitly pointed out is already suspect (what about values with over 999 hours?). The next sentence in the documentation explains why, and left me with even more questions of the WTF kind:
TIME values may range from '-838:59:59' to '838:59:59'.
Oooh Kaaay… that’s an oddly specific range, but I’m sure there has to be a technical reason for it. 839 hours is 34.958(3) days, and the whole range spans exactly 6040798 seconds. The documentation also mentions the following:
MySQL recognizes TIME values in several formats, some of which can include a trailing fractional seconds part in up to microseconds (6 digits) precision.
Therefore, it also makes sense to point out that the whole interval spans 6 040 798 000 000 microseconds, but again, these seem like oddly specific numbers. They are not near any power of two, the latter being between 2⁴² and 2⁴³, so MySQL must be using some awkward internal representation format. But before we dive into that, let me just point out how bad this type is. It is the closest MySQL has to a time interval type, and yet it can’t deal with intervals that are just a bit over a month long. How much is that “bit”? Not even a nice, rounded number of days, it seems.
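These oddly specific numbers are easy to double-check; here is a quick sanity check in Python (a scratch calculation on my part, nothing MySQL-specific):

```python
# Sanity-checking the numbers for MySQL's TIME range,
# which runs from -838:59:59 to 838:59:59.
max_seconds = 838 * 3600 + 59 * 60 + 59  # magnitude of one endpoint
span_seconds = 2 * max_seconds           # distance between the two endpoints

print(max_seconds)               # 3020399
print(span_seconds)              # 6040798
print(span_seconds * 1_000_000)  # 6040798000000 microseconds
print(2**42 < span_seconds * 1_000_000 < 2**43)  # True
```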
To make matters worse, it appears that the most popular EF Core MySQL provider maps .NET’s TimeSpan to TIME by default, despite the fact that TimeSpan can contain intervals in the dozens of millennia (it uses a 64-bit integer of 100 ns ticks, i.e. 10⁻⁷ s precision) compared to TIME’s measly “a bit over two months”. This is an issue other people have run into, and the discussion in that issue includes a “This mimics the behavior of SQL Server” remark, which made me go check and, sure enough, SQL Server’s time is meant to encode a time of day and has a range of 00:00:00.0000000 through 23:59:59.9999999, something which overall makes more sense to me than MySQL’s odd TIME range.
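To put the mismatch in perspective, assuming TimeSpan is a signed 64-bit count of 100 ns ticks (which is how .NET documents it), the ranges compare roughly like this – a back-of-the-envelope Python calculation:

```python
# Rough range comparison: .NET TimeSpan vs MySQL TIME.
# TimeSpan: signed 64-bit tick count, 1 tick = 100 ns = 1e-7 s.
TICKS_PER_SECOND = 10_000_000
timespan_max_s = (2**63 - 1) // TICKS_PER_SECOND  # maximum TimeSpan, in seconds
time_max_s = 838 * 3600 + 59 * 60 + 59            # maximum MySQL TIME, in seconds

print(timespan_max_s // (365 * 24 * 3600))  # ~29247 years in either direction
print(round(time_max_s / 86400, 2))         # ~34.96 days
```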
So let’s go back to MySQL. What is the reasoning behind such an interesting range? The MySQL Internals Manual says that the storage for the TIME type has changed with version 5.6.4, having gained support for fractional seconds in this version. It uses 3 bytes for the non-fractional type. Now, had they just used these 3 bytes to encode a number of seconds, they would have been able to support intervals spanning over 2330 hours, which would already be a considerable improvement over the current 838 hours maximum, even if still a bit useless when it comes to mapping a TimeSpan to it.
This means their encoding must be wasting bits, probably so it is easier to work with… not sure in what circumstances exactly, but maybe it makes more sense if your database management system (and/or your conception of what the users will do with it) just loves strings, and you really want to speed up the hh:mm:ss representation. So, behold:
1 bit sign (1= non-negative, 0= negative)
1 bit unused (reserved for future extensions)
10 bits hour (0-838)
6 bits minute (0-59)
6 bits second (0-59)
---------------------
24 bits = 3 bytes
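For illustration, the listed layout can be packed and unpacked with plain bit operations. This is a Python sketch following the field widths above, not MySQL’s actual implementation (which lives in its C sources):

```python
# Packing/unpacking the documented 3-byte TIME layout:
# 1 sign bit, 1 reserved bit, 10-bit hour, 6-bit minute, 6-bit second.

def pack_time(hours, minutes, seconds, negative=False):
    assert 0 <= hours <= 838 and 0 <= minutes <= 59 and 0 <= seconds <= 59
    sign = 0 if negative else 1  # 1 = non-negative, per the docs
    # bit 23: sign, bit 22: reserved (0), bits 12-21: hour,
    # bits 6-11: minute, bits 0-5: second
    return (sign << 23) | (hours << 12) | (minutes << 6) | seconds

def unpack_time(packed):
    negative = (packed >> 23) & 1 == 0
    hours = (packed >> 12) & 0x3FF  # 10 bits
    minutes = (packed >> 6) & 0x3F  # 6 bits
    seconds = packed & 0x3F         # 6 bits
    return hours, minutes, seconds, negative

print(unpack_time(pack_time(838, 59, 59)))  # (838, 59, 59, False)
```

Note how the 10-bit hour field could hold values up to 1023, yet the range stops at 838.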
This explains everything, right? Well, look closely. 10 bits for the hour… and a range of 0 to 838. I kindly remind you that 2¹⁰ is 1024, not 838. The plot thickens. I’m not the first person to wonder about this, of course, this was asked on StackOverflow before. The accepted answer in that question explains everything, but it almost didn’t, as it initially dismisses the odd choice of 838 as “backward compatibility with applications that were written a while ago”, and only later it is explained that this choice had to do with compatibility with MySQL version… 3, from the times when, you know, Windows 98 was a fresh operating system and Linux wasn’t 10 years old yet.
In MySQL 3, the TIME type used 3 bytes as well, but they were used differently. One of the bits was used for the sign as well, but the remaining 23 bits were an integer value produced like this: Hours × 10000 + Minutes × 100 + Seconds; in other words, the two least significant decimal digits of the number contained the seconds, the next two contained the minutes, and the remaining ones contained the hours. 2²³ is 8388608, i.e. 838:86:08, therefore, the maximum valid time in this format is 838:59:59. This format is even less wieldy than the current one, requiring multiplication and division to do basically anything with it, except string formatting and parsing – once again showing that MySQL places too much value on string IO and not so much on having types that are convenient for internal operations and non-string-based protocols.
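The MySQL 3 scheme is easy to reproduce; here is a Python sketch of the decimal packing just described (again illustrative, not MySQL’s code):

```python
# MySQL 3-era TIME packing: seconds in the two least significant decimal
# digits, minutes in the next two, hours in whatever remains of 23 bits.

def pack_v3(hours, minutes, seconds):
    return hours * 10000 + minutes * 100 + seconds

def unpack_v3(value):
    return value // 10000, (value // 100) % 100, value % 100

print(pack_v3(838, 59, 59))  # 8385959, safely under the 23-bit limit
print(unpack_v3(8385959))    # (838, 59, 59)
print(2**23)                 # 8388608, i.e. "838:86:08" read as digits
```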
MySQL developers had ample opportunities to fix this type, or at the very least introduce an alternative one that is free of this reduced range. They changed this type twice from MySQL 3 until now, but decided to retain the range every time, supposedly for compatibility reasons. I am struggling to imagine the circumstances where increasing the value range for a type can break compatibility with an application – do types in MySQL have defined overflow behaviors? Is any sane person writing applications where they are relying on a database type’s intrinsic limits for validation? If yes, who looked at this awkward 838 hours range and thought of it as an appropriate limitation to carry unchanged into their application’s data model? At this point, I don’t even want to know.
Despite having changed twice throughout MySQL’s lifetime, the TIME type is still quite an awkward and limited one. That unused, “reserved for future extensions” bit is, in my opinion, really the pièce de résistance here. Here’s hoping that one day it will be used to signify a “legacy” TIME value and that, by then, MySQL and/or MariaDB will have support for a proper type like PostgreSQL’s INTERVAL, which has a range of +/- 178000000 years and a very reasonable microsecond precision.
This quasi-abandoned blog notably has a “Music I like” page, which has not been updated… since four years ago. Not that anyone cares, of course. The reasons why I stopped updating it definitely include the previous sentence, but that could apply to the entirety of this website; in the case of that page particularly, there is a more specific reason. Due to contractual changes with my communication services provider, I had to stop using their streaming service, which was the one I had used the most up until that point, and which provided 10 free track downloads per month. By the way, said streaming service was discontinued later in February 2018 – a move which certainly had nothing to do with the fact that my departure in 2016 allegedly brought them from 6 to 5 monthly active users.
This meant that I no longer had a reason to necessarily find at least 10 tracks to download every month, and the rhythm at which I processed new tracks into my library became even less regular. Checking my library now, it seems I went 6 months without adding anything new; it’s entirely possible I found mediocre SoundCloud music sufficient for that period. Eventually, legitimate replacement music sources were found, and my library would continue to grow, now having over 400 additional tracks compared to when that page, which does not contain the entirety of my library, stopped being updated. That’s an average of less than 9 new tracks per month, which means I’m adding less music now than when I had the minimum monthly goal of 10.
I could dump a massive update on the “Music I like” page, to inefficiently inform the world about these 🔥 absolute bangers 🔥👌👌, but I decided there is little point to an endless list of mediocre EDM, house, electronic synth-/indie-/progressive-pop singular tracks. Realistically, it wouldn’t provide any benefit over you just finding some fine examples of these genres with the help of YouTube’s and Spotify’s recommendation engines, unless you craved the “Music I like” lists specifically because of the more obscure tracks I found, in which case you are just a creepy weirdo.
I realize it’s about time I move towards sharing my musical taste over more widely accepted methods, such as Spotify playlists; the reason why I’m yet to do so is that I often find my music elsewhere, and I don’t feel like manually adding my 1000+ track library to Spotify, searching track by track. Yes, as a programmer I also realize there are probably tools to help with this. Yes, as a programmer I’m also too lazy to bother. Instead of mentioning individual tracks without commentary, I’m going to talk about, review if you will, the albums which I’ve found to enjoy quite a lot over the last couple of years. And by “albums I enjoy”, I mean albums I like to listen to, from beginning to end, without “unfavorite” tracks.
Let’s start with Good Faith, the album released in November 2019 by Madeon. Wikipedia tells me the genre of this album is supposed to be French House, and I’m like, yeah whatever, because this doesn’t quite sound like house to me and it also doesn’t necessarily sound French. This album has a very different style from the tracks I knew from his previous album, Adventure – “You’re On”, “Pay No Mind”, “Finale” and “The City” – to the point where if it didn’t say Madeon on the cover, I’d probably assume it was from a different artist. I’m totally fine with this change, same artist or not, especially because this latest album apparently aligns more with my current taste; I definitely had “unfavorite” tracks in Adventure, while Good Faith is definitely one I listen to from beginning to end.
To stress the fact that I’m going through albums in no particular order, I’ll now talk about an album released earlier, in March 2019: Together, by Third Party. This one is much easier to classify: it clearly is a progressive house album. The melodies are great, vocals on the tracks that have them are nice, the lyrics are acceptable – keep in mind that’s about as much praise or criticism any lyrics are going to get from me, after all, I barely pay attention to them and I find anything fine as long as it doesn’t outright promote human rights violations. I suppose the really noteworthy thing is that I enjoy all nine tracks of it, something which I can’t say about most albums produced by progressive house DJs, which– hold on, albums? Exactly, barely anyone in this genre still bothers releasing cohesive albums, and if an album does get made, there’s a high chance it won’t be more than a collection of the artist’s tracks since the last album. I suppose I like this one because it is a proper album, and the tracks are not only individually enjoyable, they also flow well into each other.
Let’s now go quite a bit more into mainstream land, and by “mainstream” I don’t want to imply I’m some sort of hipster and that Madeon and Third Party are exquisite, obscure artists. What I mean is, “artists that play in top-50 radios around the world”, such as Dua Lipa. Future Nostalgia is her second album, released in March of this wonderful, blessed year, at least as far as critical acclaim for Dua Lipa albums is concerned (Metacritic score of 88/100). This album has very good tracks from start to end, and overall the title describes the genre of the album perfectly: it’s an album full of late 70s, 80s, early 90s hits, but produced in the future. Not the future with flying cars people used to dream about, but a future with exponentially exciting natural developments, and where I have fiber in this neck of the woods; the jury is still out on whether this was a good trade-off, flying cars could have paved the way for innovative drive-in (fly-in?) supermarkets, and would have an infinitely higher breads-per-second throughput than fiber optics, but I digress. If you never listened to Future Nostalgia beyond the “Physical” and “Don’t Start Now” singles brought to you by your advertiser-friendly neighborhood top-50 radio, you are missing out on many other enjoyable songs.
Speaking of artists which play in top-50 radios, let’s talk about The Weeknd and his latest After Hours album, released in March this same wonderful year, as far as critical reception of The Weeknd albums is concerned (Metacritic score of 80/100). Not as wonderful, because 80 is less than Dua Lipa’s 88. And rightfully so, because unlike the other albums I’ve mentioned, this is one I must give a hard pass, so much so that I’m bringing it here just to do that.
After Hours has two extremely well produced, extremely successful synthwave/synth-pop hits: “Blinding Lights” and “In Your Eyes”. Aaaand that’s about it as far as my taste is concerned. I gave a quick listen to the rest of the album: too much R&B for my taste, too little synth-pop. I would even go as far as to say that those tracks don’t quite fit in the album, because their style feels so distant from the rest of the tracks. And as much as I love “Blinding Lights”, it has been so overplayed and overused that it is starting to suffer from “Get Lucky” syndrome – remember how some years ago we got too much of that single Daft Punk track while almost nobody cared about the rest of their excellent Random Access Memories album? I remember, and “Blinding Lights” is slowly getting to that point.
I feel like I also have an obligation to leave this video here:
Leaving top 50 behind, I’m going to make one final recommendation. (Wait. Is this supposed to be a post with music recommendations? Recommendations to myself, I guess.) Released August this year, less than a month ago, BRONSON is the name of the debut album of the collaborative project of the same name, between ODESZA and Golden Features. It has one more capital letter than ODESZA, so that means it must be better. (Unfortunately, it seems some of their “””fans””” didn’t like the new sounds as much, and did some artist harassment. I don’t even.) For my fellow uncultured gamers, ODESZA has a couple tracks in Forza Horizon 4’s radios and “A Moment Apart” plays in the game’s intro and in the menus.
But back to BRONSON. The album is great, even if I wouldn’t mind if it had a couple more tracks. Quality over quantity, I guess. Much like many of the tracks from ODESZA, it’s hard to define their genre beyond something generic like “electronic”. BRONSON is an interesting case of an album I enjoy almost exclusively as a whole. Many of the tracks aren’t tracks I would listen to on their own. But when played from start to end, I really enjoy it, even the parts that wouldn’t normally fit my taste. And I think that’s really the best way to appreciate and evaluate this album, from start to end, without interruptions. Each track transitions seamlessly to the next, to the point where the gap between tracks introduced by some players becomes quite annoying. I’m really glad I didn’t listen to the singles from BRONSON as they were being released, as I’d probably have ignored the album if I did. The last track features Totally Enormous Extinct Dinosaurs, an artist to which I haven’t paid attention in ages, and which I really need to take some time to yay-or-nay one of these days (I really enjoyed a couple of his tracks some years ago, notably one that was featured in a Nokia commercial for a Windows Phone – that’s how long ago that was).
I suppose this ends my music reviewer roleplay, and I can now go back to enjoying my generic house tracks as recommended by my Discover Weekly playlist on Spotify. There are a few more albums worth mentioning in my library, but I’ll save those for another time. Maybe the destiny of this blog is indeed to go from cosplaying as a music blog in certain pages, to actually becoming one. It’s not like I feel like talking about work, anyway – and who knows what kind of trouble I’d get in with the HR department(s) if I did. My CV hopefully speaks for itself, and this lousy blog adds nothing anyway, even if I added some spectacular “technical posts”.
This GitHub repo, created just 5 hours before this post, shot to the top of Hacker News quite fast (see the thread). Its content is a readme containing a demonstration of the limitations of current artificial intelligence applications, specifically, the algorithms employed in Amazon product search, Google image search and Bing image search, by showing that searching for “shirt without stripes” does, in fact, bring up shirts, both with stripes and without.
At the time of writing, the brief but clever document can be seen as a mocking criticism of these systems, as it links to the pages where the three companies boast about their “broadest and deepest set” of “cutting-edge” “responsible” AI. I took these words from their pages, and of course you can’t tell which came from where, adding to the fun.
Some of the comment threads on the Hacker News submission caught my attention. For example, this comment thread points out the possible discrimination or bias apparently present in those systems, as doing a Google image search for “person” showed mostly white men to that user. This other thread discusses whether we should even apply natural language processing to a search query. In my opinion, both threads boil down to the same problem: how to manage user expectations about a computer system.
Tools like Google and Bing have been with us for so long, and have been improved to such a point, that even people who work in IT, and have a comparatively deep understanding of how they work, often forget how much of a hack they really are. We forget that, in many ways, Google is just an extremely advanced, web-scale successor to good old grep. And we end up wondering if there is ethnic or gender discrimination in our search results, which there probably is, but not because Google’s cyborgs are attracted to white men.
When web search was less perfect, when you needed to tinker with your search query multiple times to even get close to the results you wanted, it was very easy to see how imperfect those systems were, and we adjusted our expectations accordingly. Now, we still need to adjust our queries – perhaps even more often than before, as some Hacker News commenters have suggested – but the systems are much fuzzier, and what ends up working feels more random to us humans than it once did. Perhaps more interestingly, more users now believe that when we ask Google for something, it intrinsically understands the concepts of what we mentioned. While work is certainly being done to ensure that is the case, it is debatable whether that will even lead to better search results, and it’s also debatable whether those results should be “unbiased”.
Often, the best results are the biased ones. If you ask your AI “personal assistant” for the weather, you expect the answer to be biased… towards your current location. This is why Google et al. create a “bubble” for their users. It makes sense, it’s desirable even, that contextual information is taken into account as an additional, invisible argument to each search. Programmers are looking for something very specific when they search the web on how to kill children.
Of course, this only makes the “shirt without stripes” example more ridiculous: all the information on whether to include striped shirts in the results is right there, in the query! It does not need any sort of context or user profiling to answer this search query “correctly”, leading to the impression that indeed these systems should be better at processing our natural language… to the detriment of people who like to treat Google as if it was grep and who would use something more akin to “shirt -stripes”, which, by the way, does a very good job at not returning shirts with stripes, at least on a private browsing window opened by me!
Google image search results for “shirt -stripes”. I only see models with skin colors on the lighter side… hmm 🤔
Yes, I used “shirt -stripes” because I “read the documentation” and I am aware of the limitations of this particular system. Of course, I’m sure Google is working towards overcoming these limitations. But in the meantime, what they offer is an imperfect system that seems to understand our language sometimes – just go ahead and Google something like “what is the president of France” – but fails unexpectedly at other times.
And this is the current state of lots of user interfaces, “artificial intelligence”, and computer systems in general. They work well enough to mask their limitations, we have grown accustomed to their quirks, and in their masking, we ultimately perceive them to be better than they actually are. Creating “personal assistants” which are ultimately just a glorified front-end to web search has certainly helped us perceive these tools as more advanced than they actually are, at least until you actually try to use them for anything moderately complex, of course.
Their fakery is just good enough to be a limitation, a vulnerability: we end up thinking that something more is going on, that these systems conspire in their quirks, that some hidden agenda is being pushed by having Trump or Greta appear more often in search results. Most likely, there are way more pictures of white people on the internet than of any other skin color1, and I’m also sure that Greta has a better modern-day-equivalent-of-PageRank than me or even the Portuguese president, even taking into account the frequency with which he makes headlines. Blame that on our society, not the tools we use to navigate this mess we created.
1 Quite the bold statement to leave without a reference, I know. Of course, it boils down to a gut feeling, a bias, if you will. Whether computer systems somehow are more human by also being biased, is something that could be discussed… I guess I’ll try to explore this in a future blog post I’ll never get to actually write.
Many are aware that some YouTubers are unhappy with how YouTube operates. But are you aware that Android app developers go through similar struggles with Google Play? Let me try and explain everything that’s wrong with Android in a single 20-minute read.
Android was once considered the better choice of mobile platform for those looking for customizability, powerful features such as true multitasking, support for less common use cases, and higher developer freedom. It was the platform of choice in research and education, because not only are the development tools free and cross-platform, Android was also a very flexible operating system that did not get in the way of experimenting with innovative concepts or messing with the hardware we own. This is changing at an ever faster pace.
While major new Android versions used to bring features that got both users and developers excited, since a few versions ago, I dread the moment a new Android version is announced and I find myself looking for courage (heh) to look at the changelogs and developer guidelines for it. And new Android versions are not the only things that make my heart beat faster for the wrong reasons: changes to Google Play Store policies are always a fun moment, too.
Before we dive in any further, a bit of context: Android was not the first mobile OS I used; references to my experiences and experiments with Windows Mobile 6.x are probably scattered around this blog. I started using Android at a time when 4.2 was the latest version, I remember 4.4 being announced shortly after, and that was the version my first Android phone ran until the end of its useful life. Android was the first, and so far only, mobile operating system for which I got seriously invested in app development.
I started messing with Android app development shortly before 6.0 Marshmallow was released, so I am definitely not an old-timer who can say he has seen Android evolve from the beginning, and certainly not from the perspective of a developer. Still, I feel like I have witnessed a decade of changes – in large part because, even during my “Windows Mobile experiments” era, I was paying attention to what was happening on the Android side, with phones I couldn’t yet afford to buy (my Windows Mobile “Pocket PCs” were hand-me-downs). I am fully aware of how bad Android was for both users and developers in the 4.x and earlier eras, in part because I still had the opportunity to use these versions, and in part because my apps had to support some of them.
API deprecation and loss of backwards compatibility
With every Android version, Google makes changes to the Android APIs. These APIs are how apps interact with the operating system, and, simplifying things a bit, they pretty much define what apps can and can’t do. On top of this, some APIs require permissions, which you agree to when you install apps that use them, and some of these permissions can be allowed or denied by the user at runtime (of course, the app can refuse to run if the permissions are denied, but the idea is that it will degrade gracefully and provide at least some functionality without them). This is the case for the APIs that access your contact list or your location.
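As a concrete example, this is roughly what the runtime permission flow looks like from an app’s perspective – a minimal sketch assuming the code lives inside an Activity, with `loadContacts` and `showContactlessUi` as hypothetical helpers:

```java
// Sketch of the runtime permission flow described above, using the
// androidx/support library helpers. READ_CONTACTS is one of the "dangerous"
// permissions the user can deny while the app runs.
private static final int REQUEST_CONTACTS = 42;

void loadContactsOrDegrade() {
    if (ContextCompat.checkSelfPermission(this, Manifest.permission.READ_CONTACTS)
            != PackageManager.PERMISSION_GRANTED) {
        // Triggers the system permission dialog; the answer arrives in the
        // callback below.
        ActivityCompat.requestPermissions(this,
                new String[]{Manifest.permission.READ_CONTACTS}, REQUEST_CONTACTS);
    } else {
        loadContacts(); // hypothetical helper
    }
}

@Override
public void onRequestPermissionsResult(int requestCode, String[] permissions,
                                       int[] grantResults) {
    if (requestCode == REQUEST_CONTACTS) {
        if (grantResults.length > 0
                && grantResults[0] == PackageManager.PERMISSION_GRANTED) {
            loadContacts();
        } else {
            // Degrade gracefully instead of refusing to run.
            showContactlessUi(); // hypothetical helper
        }
    }
}
```

This is the model the rest of this post keeps coming back to: the user gets a prompt and makes a choice, and the app adapts to the answer.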
New Android versions include new APIs and, in the past, barely any changes were made to APIs introduced in previous versions. This meant that applications designed with an older version in mind would still work fine, and developers did not need to immediately redesign their apps with new versions in mind.
In the past two to three years, new Android versions have also begun removing APIs and changing how the existing ones work. For example, applications wishing to stay active in the background now have to display a permanent notification – an idea which sounds good in theory, but the end result is a handful of permanent notifications in your drawer, one for each application that may need to stay active. I have two on my phone: one for the call recorder, and another for the equalizer system. One of my own apps also needs to show a similar notification on Android 8/Oreo and newer, in order to reliably perform Wi-Fi scans to locate the user in specific locations.
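For illustration, here is roughly what that requirement translates to in code on Android 8+ – a sketch of a foreground service, where the channel id, strings and icon are placeholder assumptions:

```java
// Sketch of the Android 8+ rule described above: a service that wants to keep
// running must promote itself to the foreground with a persistent notification.
public class ScanService extends Service {
    private static final String CHANNEL_ID = "scan_service"; // placeholder id

    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
            // Android 8+ also requires every notification to belong to a channel.
            NotificationChannel channel = new NotificationChannel(CHANNEL_ID,
                    "Background scanning", NotificationManager.IMPORTANCE_LOW);
            getSystemService(NotificationManager.class).createNotificationChannel(channel);
        }
        Notification notification = new NotificationCompat.Builder(this, CHANNEL_ID)
                .setContentTitle("Scanning in the background")
                .setSmallIcon(R.drawable.ic_scan) // placeholder resource
                .build();
        // Without this call, the system stops the service shortly after launch.
        startForeground(1, notification);
        return START_STICKY;
    }

    @Override
    public IBinder onBind(Intent intent) {
        return null;
    }
}
```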
In the upcoming Android 10/Q, Google intends to restrict even further what apps can do. They are removing the ability for background apps to access the clipboard, killing an entire category of clipboard management apps (apps that keep a history of what you copied, sync the clipboard with your other phones and computers, and so on). Currently, all apps can access the clipboard without special permissions; the correct way to solve this would be to add a permission prompt, not to get rid of the API entirely. Applications can also no longer turn Wi-Fi on or off, which prevents automation apps from, for example, turning off Wi-Fi when you’re driving. And they are thinking of entirely preventing apps from accessing arbitrary files in “external storage” (SD cards and the area of internal memory on your phone where screenshots and camera pictures go, and where you put your MP3s, game ROMs for emulation, etc.).
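The clipboard case is particularly frustrating because the API involved is trivial. A clipboard history app is essentially a listener like this sketch (the `history` store is hypothetical), and on Android 10 it simply stops working for apps in the background:

```java
// Sketch of the listener clipboard managers rely on. Android 10 restricts it:
// only the foreground app and the default input method still receive
// clipboard contents.
ClipboardManager clipboard =
        (ClipboardManager) context.getSystemService(Context.CLIPBOARD_SERVICE);
clipboard.addPrimaryClipChangedListener(() -> {
    ClipData clip = clipboard.getPrimaryClip();
    if (clip != null && clip.getItemCount() > 0) {
        CharSequence text = clip.getItemAt(0).coerceToText(context);
        history.add(text.toString()); // hypothetical local history store
    }
});
```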
Note that all of these abilities being removed for “security” could simply be gated behind a permission prompt you’d have to accept, as with the contact list or location. Instead, they decided to remove them entirely – even if users want these features, apps won’t be able to implement them. Existing apps will probably be review-bombed by users who don’t understand why things no longer work after updating to the shiny new Android version.
These changes to existing APIs have real consequences for users and developers. Applications that worked fine until now may stop working. Developers need to update their apps accordingly, implementing less user-friendly workarounds, explanation messages, and so on. This takes time, effort and money that would be better spent fixing other issues or developing new features. For small teams or solo developers, especially those doing app development as a hobby or as a second job, catching up with Google’s latest “trends” can be insurmountable. For example, the change disallowing background services meant that I spent most of my free time during one summer redesigning the architecture of one of my apps, which in turn introduced new bugs, which had to be diagnosed, corrected, etc. – and, in the end, said app still needs to show a notification to work properly on recent Android versions.
There are other ways Google can effectively deprecate APIs and thus limit what applications can do, without releasing new Android versions or having to update phones to them. Google can decide that apps that require certain permissions will no longer be allowed on the Play Store. Most notably, Google recently disallowed the SMS and Call Log permissions, which means that apps that look at the user’s call log or messaging history will no longer be allowed on the store.
Apps using these permissions can still be installed by downloading their APKs directly or by using alternative app stores, but they will no longer be allowed on the Play Store. This effectively means that for many apps, the version on the Play Store no longer contains important functionality. For example, call recorders are no longer able to associate numbers with the recordings, and automation apps can no longer use SMS messages as a trigger for actions. Because Google Play is where 99% of people get their apps, this effectively means functionality requiring these permissions is now disallowed, and won’t be available except to an extremely small minority of users who know how to work around these limitations.
The Google Play Store is the YouTube of app developers
Being on the Play Store is starting to feel much like producing content for YouTube, where policy changes can be sudden and announced with little advance notice. On YouTube, producers always have to be on the lookout for what might get a video demonetized, on top of dealing with content claims – both actions driven by entirely automated, opaque systems. On the Play Store, we need to be constantly looking out for things that might suddenly get our app pulled or our developer account banned – together with the accounts of everyone who Google decides has anything to do with us:
Your app can be pulled for “deceptive ads” (this includes, incredibly, linking to your other apps. Also, did you know that asking for donations is prohibited unless you do so through in-app purchases?);
And this is just a tiny sample – not even the “best of” – of the horrifying stories posted to r/androiddev every other day. For each of these, there are dozens more in the respective “categories”. Sometimes the same stories, or similar ones, also make the rounds on Hacker News. It seems Google treats Play Store bans and app removals with the same flippancy with which online games ban players suspected of cheating, or worse. Playing online games isn’t the career of most people who do it, but Android app development is, which leads to the obvious question: what do people do when they are banned?
After writing this, I realize my YouTube analogy is terrible. You see, on YouTube generally one receives strikes, instead of waking up one day to suddenly see their account banned. YouTubers also have the opportunity to profit from the drama caused by the policy changes by “reacting” to them, for example. And while YouTubers typically have the sympathy of their viewers, app developers have to deal with user outrage – because users have no idea, or don’t care, about why we’re being forced to massively degrade the performance and features of our apps. For example, the developer of ACR, a popular call recorder, had to deal with bad app reviews, abuse and profanity among thousands of emails from outraged users after removing the call log permission, and this was after an extensive campaign warning users of the upcoming changes (as a user of ACR, I uninstalled the Play Store version and installed the “unchained” version, which keeps the call log features, through XDA Labs).
As a freelance developer or as a small company, developing for Android is riskier than ever. I can start working on an app idea today and it’s possible that in six months, when it is ready for the initial release, changes to the store policy will have rendered my app unpublishable or have severely affected its functionality… in addition to the aforementioned API deprecations and semantic changes, which require constant upkeep of the code to keep up with the latest versions.
If you opened the links above, by now you have probably realized another thing: user support from actual humans is non-existent – if only their bots were as responsive as Google Assistant… And if they are not bots, then they are humans who only spit out canned responses, which is just as bad. It is widely known that the best method for getting problems solved with Google Play listings is to catch the attention of a Google employee on social media.
It seems the level of support Google gives you is correlated to how many people will read your rants about your problems with their platforms. And it’s an exponential correlation, because being big isn’t enough to get a moderate level of support; you must be giant. This is a recurring problem with most Google services, especially if you are not using G Suite (apparently, app developers do not count as “paying customers” when it comes to support). Of all the things I’d like the EU to regulate (and especially, to not regulate, but that’s a story for a different time), the obligation for these mega-corporations to provide actual user support is definitely one of them.
Going back to the probably flawed YouTube analogy, there’s one more parallel to draw: many people believe that in recent years, YouTube has been making changes to policies, business models and the “algorithm” that heavily favor the big, already-established creators and make it hard for smaller ones to ever be successful. I believe we are seeing a similar trend on the Google Play Store – just keep in mind you must not measure an app’s popularity or “level of establishment” by its number of downloads or active users, but by how much profit it generates in ad revenue and IAP cuts.
“Android is open source”
“Android is open source” is the joke of the year – for the fifth consecutive year. While it is true that the Android Open Source Project (AOSP) is still a thing, many of the components that make Android recognizable and usable, both from an end user and developer’s perspective, are increasingly closed source.
Apps made by Google are able to do things third-party apps have trouble replicating, no doubt due to the tight-knit interaction between them and the proprietary behemoth that is Google Play Services. This is especially noticeable in the “Google” app itself, Google Assistant, and the Google launcher.
If you install an AOSP build, many things will be missing and many apps – my own ones included – will have trouble running. Projects looking to provide “de-googlified” versions of Android have developed extensive open source replacements for many of the functions provided by Google Play Services. The fact that these replacements had to be community-developed, and the fact that they are very much necessary to run the majority of popular applications, show that nowadays Android is “open source” only in the same loose sense that it can be considered a Linux distro.
AOSP, on its own, is effectively controlled by Google. The existence of AOSP is important, if nothing else, to define common APIs that the different “OEM flavors” of Android must support – ensuring, with minor caveats, that we can develop for Android and not for “Samsung’s Android” or “Nokia’s Android”. But what APIs come and what APIs go is completely decided by Google, and the same is true for the overall system architecture, security model, etc. This means Google can bend AOSP to their will, strip it of features and move things into proprietary components as much as they want.
Speaking of OEMs and inter-device compatibility, it’s obvious that this push towards implementing important functionality in Google Play Services and making the whole operating system operate around Google’s components has to do with keeping the “OEM flavors” under control. A positive effect for users and developers is that features and security patches become available even on devices that don’t receive OEM updates, or only receive updates for the major Android version they came with, and therefore would never receive the new features in the latest major release. A negative effect is that said changes can affect even old Android versions overnight and completely at Google’s discretion, much like restrictions on what APIs and permissions apps on the Play Store are allowed to use.
Google’s policy on Android openness seems to gravitate towards opening the source only as much as is necessary for OEMs to make it run on their devices. We are not at that extreme point yet – mainly because the biggest OEMs have enough leverage to prevent it – but I feel that, at this point, if Google could make Android entirely closed source, they would. I wonder what future Fuchsia holds for us in this regard.
So secure you can’t use it
The justifications for many of the changes in later Android versions and Google Play policies usually fall into one of two types: “security” and “user experience”, with the latter including “battery life”. I’m not sure for whom Google is designing their “user experience” in recent years, but it certainly isn’t for “proficient users” like me. Let’s, however, talk about security first.
Security should be proportional in strength to the value of what it protects. With each major Android version, we see a bigger focus on security; for example, it’s becoming harder and harder to root a phone, short of installing a custom ROM that includes superuser functionality from the start. One might argue this is desirable, until you notice security and privacy have also been used as the excuse to disallow permissions like call log and messaging access, or to remove APIs such as the external storage one.
This increase in security strength makes sense: security is now stronger because we are also storing more valuable information in our phones, from “old-fashioned” personal information about us and our acquaintances, to biometric information like fingerprint, facial and retinal scans. Of course, and this is probably the part Google et al. are most worried about, we’re also storing entire payment systems, the keys for DRM castles, and so on.
Before finishing my point about security, let’s talk a bit about user experience. User experience is another popular excuse for making changes while limiting or altogether removing certain features. If something has to be particularly complicated (or even “insecure”) in order to support the use cases of 1% of the users, it often gets simplified… while the “particularly complicated” or “insecure” system is stripped entirely, leaving the aforementioned 1% with a system that no longer supports their use cases. This doesn’t sound too bad, right? However, if you repeat the process enough times, as Google is bound to do in order to keep releasing new versions of their software (so that their employees can get their bonuses), tying the hands of 1% of the users at a time, you are probably going to be left with something that lets you watch ads only… and probably Google ads at that, I guess. You didn’t need to make phone calls, right? After all, the person on the other side might be pulling a social engineering scheme on you, or something…
Strong security and good user experience are hard to combine. Apparently, permission prompts provide neither sufficient security nor an acceptable user experience, because it seems easier to remove the permissions altogether than to let users have a choice.
User choice is what all of this boils down to, really. Android used to give me the choice of being slightly insecure in exchange for apps more powerful and innovative than anything on competing mobile platforms. It used to give me the choice of running 10 apps in the background and having my battery last half a day as a result; now, if I want to do so, I must deal with 10 ongoing notifications. I used to be able to share files among apps as I do on my desktop, but apparently that is an affront to good security too. I used to be able to log the Wi-Fi networks in my vicinity every minute, but in Android 9 even that was limited to a handful of scans per hour, killing some legitimate use cases – including my master’s thesis project – in the process. Fortunately, in academia we can just pretend the latest Android version is 8.
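To give an idea of how tight the Android 9 throttle is: foreground apps are limited to four scans per two-minute window, and background apps to as little as one scan every 30 minutes. This pure-Java model of a windowed scan limit is illustrative – the class is not a platform API, only the quota numbers come from Android 9’s documented behavior:

```java
import java.util.ArrayDeque;

// Models Android 9's windowed Wi-Fi scan throttle: at most maxScans scans per
// sliding window of windowMs milliseconds.
class ScanThrottle {
    private final int maxScans;
    private final long windowMs;
    private final ArrayDeque<Long> timestamps = new ArrayDeque<>();

    ScanThrottle(int maxScans, long windowMs) {
        this.maxScans = maxScans;
        this.windowMs = windowMs;
    }

    /** Returns true if a scan at nowMs is allowed, recording it if so. */
    boolean tryScan(long nowMs) {
        // Drop scans that have aged out of the sliding window.
        while (!timestamps.isEmpty() && nowMs - timestamps.peekFirst() >= windowMs) {
            timestamps.removeFirst();
        }
        if (timestamps.size() >= maxScans) {
            return false; // quota exceeded, analogous to startScan() failing
        }
        timestamps.addLast(nowMs);
        return true;
    }
}
```

With the foreground quota (4 scans per 120 seconds), a fifth scan inside the window is denied – so an app that wants to log nearby networks every minute runs into the limit almost immediately.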
Smart cards, including SIM cards, were invented to containerize the secure portion of systems. Authentication, attestation – all of that was meant to be done there, so that the bigger system could be less secure and more flexible. Some time in the last two decades, multiple entities decided it was best (maybe it provided “better user experience”?) that important security operations be moved into the application processor, including entire contactless payment systems. Things like SafetyNet were created. My argument in this section goes way beyond rooting, but if my phone is rooted and one of the apps to which I granted root permission steals my banking details… apparently the banking app shouldn’t have been allowed to run in the first place? Imagine if my bank’s online banking refused to open on my desktop because it knows I know the password for the administrator account.
Still on the topic of security, by limiting what apps distributed on the Play Store are allowed to do and ending support for legitimate use cases, Google ends up encouraging side-loading (direct APK download and installation). This is undesirable from a security point of view, and I don’t think I need to explain why.
Our phones are definitely more secure now, but so much “security” is crippling the use cases of people who do more than binge-watch YouTube and their social network feeds. We should also keep in mind that many people are growing up with smartphones and tablets alone, so “just use your desktop for those advanced tasks” is not an answer. It’s time for my outlandish proposal of the week: what about not storing so much security-sensitive stuff in our phones, so that we don’t need so much security, and can actually get back the flexibility and “security pitfalls” we had before? Android, please let me shoot myself in the foot like you used to.
Lack of realistic alternatives
This evolution of Android towards appealing to the masses (or to Google’s definition of what the general public should be allowed to do) would not worry me so much if, as a user, I had a viable mobile OS alternative. On the Apple side, we have iOS, whose appeal from the start was to provide an “it just works”, secure platform, with limited flexibility but an equally limited margin for error. Such a platform is actually a godsend for many people, who certainly make up the majority of users, I don’t doubt. Such a platform doesn’t work for me, because as I said, I need to be able to shoot myself in the foot if I want to: let me have 2 hours of battery life if I want, let my own apps spy on my location if I want.
This was fine for many years, because we had Android, which let us do this kind of stuff. It just so happens that because of AOSP, and because there were no other open source or licensable platforms with traction, Android ended up being the de-facto standard for every smartphone that isn’t an Apple one. On the low end, Android is effectively the only option. Of course, this led to Android having the larger market share. Since “everyone” uses it now, there’s pressure to copy the iOS model of “it just works” and “safe for people with self-harm tendencies” – you can’t hurt yourself even if you want to.
Efforts to introduce an Android competitor have been laughable at best. Windows Phone/Windows Mobile failed in part because of a weak and possibly too-late entry, combined with a dubious “vision” and bad management decisions on Microsoft’s part. In the end, what Microsoft had was actually good – if that weren’t the case, there wouldn’t still be plenty of die-hard WP/WM fans – but arriving so late (and with so many mixed signals about the future of the platform) meant developers were never sufficiently captivated, and without the top 100 apps on board, users won’t find the platform any good, no matter how excellent it is from a technical standpoint. Obviously, it does not help that a significant number of those “top 100 apps” are Google properties; in fact, the only reason Google has their apps on iOS is because, well, iOS was already there when they arrived on the scene.
If even a big player with stupidly deep pockets like Microsoft can’t introduce a third mobile platform, the fate of smaller-scale attempts like Firefox OS is quite predictable. These smaller attempts have an additional problem: finding hardware to run on. It doesn’t help that you can’t change the OS on a phone the way you can on a PC. In fact, back in the long-gone year of 2015, I was already ranting about the lack of standardization in smartphone hardware. It’s actually fun to go back to that post, made when Android 4.4 was the latest version, and see how my perception of Android has changed.
I should also note that if a successful Android alternative appears, it will definitely run Android apps, probably through a compatibility layer. In a way, Android set the standard for apps much in the same way that 15 years ago, IE6 was setting web standards in the worst way possible. Did someone say antitrust?
Final thoughts
Android, and therefore Google, set the standard – and the implementation – for what we can and can’t do with a smartphone, except when Apple introduces a major innovation that OEMs and Google are compelled to quickly implement in Android. These days, it seems Apple is stalling a bit on the smartphone innovation front, so Google is taking the opportunity to “innovate” by making Android more similar to iOS, turning it into a cushioned, limited, kid-safe operating system that ties the hands of developers and proficient users.
Simultaneously, Google is solving the problem of excessive shovelware and even a bit of malware on the Play Store, by adding more automation, being even less open about their actions, and being as deaf as ever. Because it’s hard to understand whether apps are using certain permissions legitimately or not – and because no user shall be trusted to decide that by themselves – useful applications, from call recording tools, to automation, to literally any app that might want to open arbitrary files in the user storage, are being “made impossible” by the deprecation and removal of said permissions and APIs.
We desperately need an Android alternative, but the question of who will develop, use and target said alternative remains unanswered. What I do know is that I no longer feel happy as an Android developer, I no longer feel happy as an Android user, and I’m not at all likely to recommend Android to my friends and family.
Edited at 2:56 March 28th UTC to add clarification about Android clipboard access.