GTA VI will be a disappointing masterpiece

How do you make a sequel to the highest-grossing entertainment product of all time, which is also the second best-selling video game after Minecraft? A yet unreleased sequel so impactful in online culture that its trailers break YouTube viewership records and get recreated by fans, not just in dozens of other games, but also in live action, with the involvement of brands and personalities that would normally be disconnected from the games industry. A sequel that, years away from its release, already had multiple podcasts dedicated specifically to its discussion; a successor which is almost guaranteed to break sales records yet again, even if it were to somehow get a worse critical reception than its predecessors.

This is the question that thousands of people over at Rockstar Games have been working to answer for over a decade; an answer which millions of fans have been waiting to examine for nearly as long, while desperately trying to decipher every hint of Rockstar’s progress on Grand Theft Auto VI. It would be extremely unlikely for the final product not to be critically acclaimed, but it being unanimously perceived as nearly perfect would be equally improbable: no piece of media with such a large and diverse audience can possibly fully align with the preferences of every single fan. When the dust settles, every single one of us shall have remarks to note in the scorecard, while still recognizing how much of a masterpiece it will be.

I find the anticipation around certain high-profile game releases, particularly this one, to be positively refreshing. After all, video games are software, but such anticipation presents a stark contrast to the way many current software innovations appear to be shoved down users’ throats – an increasingly common feeling as more software is delivered as continuously evolving services, rather than individually acquirable major releases. This level of widespread anticipation doesn’t really occur for any other type of software anymore, and hasn’t for a long time: even limiting ourselves to the consumer space and going back a decade, if a new Office, Gmail or Instagram redesign had been delayed for a year, there would hardly have been any uproar. There were no “we got [thing] before Windows 11” memes, and anything AI has too many socioeconomic implications to attract that kind of consensus for the foreseeable future.

The fact that plenty of people still look forward to new games, even as they seem uninterested in most other software developments, is unsurprising: games are entertainment and not tools (usually); people are free to choose what games they play, and don’t generally feel that a choice was forced upon them, so it is natural that they will spend some of their time making such choices, and being excited about upcoming options that they think will suit them. Then, considering that there isn’t a lack of new games being made, the fact that millions of people choose to be interested in GTA VI is proof of the great work that Rockstar Games has done over the years – as is the fact that some of them seem to mainly be interested in loudly complaining that it won’t be to their liking, as if a) they were entitled to having an upcoming GTA title that’s to their liking, and b) they already knew that, tragically, it won’t suit their preferences.

A plethora of displeased pleas

Living up to the full expectations and preferences of millions of players is impossible. Grand Theft Auto V certainly didn’t: some players complained about how the game was missing features and mechanics from prior titles (notably, from Grand Theft Auto: San Andreas); others complained that the story felt weak, missing gravitas, with “plastic” and over-the-top characters that didn’t feel particularly likeable, or with underwhelming endings; others criticized how the world did not offer many structured activities once players were done with the story and the secondary missions, making it a bit less appealing than the worlds of previous games. The mission design also received criticism for being restrictive, limiting player creativity; this NakeyJakey video includes insightful criticism of such aspects.

A likely larger group of players was left satisfied with the quality of the single-player content, but not its quantity: some wanted an even longer story, others wanted expansions in the style of those made for Grand Theft Auto IV. Largely thanks to the theft of GTA V’s source code, it is now known that the initial vision for the game encompassed such single-player DLCs, as well as additional missions and mechanics that, at best, were possibly adapted into GTA Online content years later.

None of the criticism prevented GTA V from breaking a handful of sales records within the first week of release back in 2013, and it later became the second best-selling game of all time, after Minecraft – whose first barebones release had happened three years prior. Reports indicate that, by now, GTA V has generated close to $10 billion in revenue. Yet you won’t have to dig very deep to find plenty of people saying they prefer one of the earlier GTA titles – whose revenue, combined, didn’t surpass that of GTA V alone.

Great games sell well, but the individual preferences of the player base are hardly fully aligned with revenue numbers. Fortunately, people don’t just play the games they think are perfect. A vast set of mostly satisfied players is definitely more profitable than an extremely satisfied, but smaller, cult following. Will GTA VI be able to appease a record number of people without irreparably disappointing the most devout fans of the series?

I doubt anyone who is looking forward to VI’s release wants to acknowledge this now, but despite the massive budget and development time that went into it, it’s certain that once the dust settles, a good number of GTA fans will continue to prefer the earlier titles, if only out of nostalgia. The earlier games have always had fewer fans (for a start, they were released to a smaller, less developed market), but those who revisit these older titles regularly are the ones who really like them.

Regarding fan nostalgia, the appearance of a PlayStation-like console in the second GTA VI trailer led to some of the wilder speculation going around, according to which the 2002 game Grand Theft Auto: Vice City would be playable within GTA VI, to some extent. In my opinion, this is technically feasible, and would be a fun way for Rockstar to cheekily satisfy those nostalgic players. It is, however, something which might not have a good reason to exist in the game, besides being a cool technology demo and a novelty. I don’t think it is very high on anyone’s wishlist.

Speaking of speculation, that’s exactly what I am partaking in. I think I’ve made mostly agreeable, low-risk predictions so far – perhaps even disappointingly so – but from reading the title alone, I am sure that many would be quick to trash me in the comments, claiming I hate the game, or Rockstar, or something along those lines, just for alluding to the quite basic notion that a piece of art can’t possibly be perfect for everyone. Fortunately, this is not one of those common social platforms; this is the great innovation called the “personal blog,” one where I encourage such individuals who don’t know what nuance is to either leave their thoughts in their heads or to publish them elsewhere, preferably some place where I won’t be notified about them.

It’s never too late to speculate

I am not truly qualified to speculate on what GTA VI will or won’t be, but then again, who is? Most of the fans speculating, even the most respected social media personalities in the space, don’t have game development experience. More interestingly, I would say that game developers, even those with experience in the very genre of open-world action-adventure titles, and even former Rockstar employees from decades past, likely don’t have enough experience to accurately comment on GTA VI development matters.

We are talking about a game whose budget is the highest of all time, with a development timeline that’s by far the longest of all the games in the series, a level of public anticipation that beats all games except perhaps juggernaut series like Half-Life, and the naturally high ambition that comes with making a successor to the highest-grossing entertainment product of all time. It’s also being developed by what may be the most secretive company in its space. All combined, this is a really unique circumstance. It is safe to say that the only people who could possibly make accurate comments on the development of GTA VI would be those who are working on it, and maybe not even them.

My impression from the leaks and rumors over the years is that most people working at Rockstar Games are not aware of many of the aspects of their upcoming games. This type of “confusion” doesn’t even have to come from active efforts to keep people in the dark: I work for a company that’s roughly one tenth the size, and I know basically nothing about most of the ongoing initiatives – and this is at a place that’s very internally transparent, and where I could easily get information about anything that goes on in the business by just asking… and by paying a bit more attention in all-hands meetings. There’s usually just so much information that isn’t all that relevant to one’s specific role that it’s easy not to be well-versed in it all.

Note that the development of GTA VI is uncharted territory even for Rockstar Games themselves. This is their usual form, anyway: I think that for every game they’ve made, they’ve never meaningfully repeated their own development processes, and their ambition has always increased significantly. The story of how VI will have come to be will certainly be quite different from that of IV and V. Between the lack of visibility most employees probably have into the details of how the process is going, and the fact that it is a complex and lengthy one with still plenty of room for small changes, I wouldn’t take any insider accounts as gospel, even when they are truthful to the source’s perceptions.

One recurring comment I’ve seen online is, paraphrasing, to “let them [Rockstar] cook,” when it comes to discussing leaks, rumors and personal wishlists for the next GTA title – as if public comments were detrimental to the game’s development. The way I see things, we can comment and speculate all we want; people always have, people always will. Of all game developers, I feel that Rockstar is one of the best at shielding themselves from outsider opinions about in-progress work, and the best at only showing to the public what they actually want the world to see (extraordinary circumstances, like the 2022 leaks, notwithstanding). As long as people are not being disrespectful or harassing employees, I doubt speculation is causing any harm, and everything insider sources have ever shared about the topic points to them having great fun with how wrong the fan theories often are.

Some of the rumors currently going around take for granted that GTA VI is already playable from start to finish and that the game only needs “polishing.” While I definitely believe the former, I highly doubt the latter, unless we use an extremely broad definition of “polishing.” That the story can be played through doesn’t mean that more secondary content like side-missions, open world activities, soundtrack production, and even cross-cutting aspects like localization are finalized – and I don’t think the work on those would count as mere “polish.” After all, there is so much to a GTA game besides the more linear story aspects.

Other rumors point towards the story not even being finalized yet, with the final chapter(s) being stuck in development hell. For me, it is very difficult to believe that the story doesn’t have a conclusion, or set of possible conclusions, written yet. However, I can picture scenarios where the final missions are difficult to realize exactly as originally written, or where the developers have trouble getting these last moments to evoke the desired emotions, and the additional iterations required to deliver good storytelling are causing some churn. I can also imagine a scenario where the story has different possible endings, Rockstar wants the post-credits gameplay to be affected differently by the consequences of those endings, and it’s the realization of these consequences that is taking more effort than expected. Through a game of broken telephone – and a lot of these rumors seem to come from sources that know sources – the rumor would end up becoming that the final chapter is in development hell.

It’s interesting how we know so little about the development of such a giant project that each of us can easily believe rumors that are essentially incompatible with each other, while we are less than a year from the currently targeted release date and have seen two trailers already. It’s particularly amusing, given that the game’s “unofficial trailer” consisted of so much leaked content, including plenty of developer captures originally shared in Rockstar’s Slack workspace. GTA VI is so vast that this much content still doesn’t come close to even telling half of the story.

Personally, I highly doubt Rockstar would need an entire year just for “polish,” particularly since we’re talking about a company with more than enough people to tackle many different problems simultaneously. But what “polishing” encompasses depends on who you ask, and Rockstar has used this word in the past as a way to justify delays without having to commit to any details. They did so when delaying Red Dead Redemption 2 for the second time, and previously when delaying the original PC release of GTA V. I don’t see “polish” as anything other than marketing speak for “it isn’t done because it isn’t done.”

The fictional past doesn’t explain the fictional future

Despite having a radically different setting from GTA VI, Red Dead Redemption 2 has been used as the basis for some of the fan expectations and theories about how the upcoming GTA title is going to play, all the way from particular game mechanics to the way the story will unfold.

This very much comes down to personal taste, but I am not too enthralled by the notion of a GTA game that feels too much like RDR 2 did. The level of world detail and visual quality of the latter is top notch, and the GTA VI trailers show that we’ll continue to see improvements on this front. But I wasn’t a fan of the clear separation of the chapters in the story of RDR 2, nor of the generally slower-paced storytelling. Despite its world having more structured activities to pursue, even after the completion of the story, I think its setting and the design of certain game mechanics didn’t encourage the “go anywhere, mess around and find out” type of gameplay that’s been a staple of GTA games since the first 3D entry in the series.

Don’t get me wrong, there are plenty of ways to have that chaotic sandbox type of fun in RDR 2 – and one can argue that the absurdity of the chaos is only heightened by the serious world tone. However, to me, RDR 2 is at its best either when it is at its most peaceful or when it is presenting mission-driven action/combat; more violent emergent gameplay didn’t feel so good to me, causing a very palpable ludonarrative dissonance that took me out of the immersion. I never felt that with the same intensity in a GTA game; maybe I’m just bad at playing as a low honor Arthur? With this said, I appreciated the more serious stories of RDR 2 and of GTA IV, compared to that of V. My hope is that Rockstar will be able to once again tell a dramatic high-stakes story, while making it feel more action-packed than RDR 2 felt to me, and while still allowing the unfettered chaotic fun moments to naturally take place.

I don’t think all of the fan-favorite mechanics of RDR 2 would work well in a GTA game, and my view is that the two series have different audiences. In practice these are not wholly disjoint groups of people, but when someone goes to play a GTA game, they’re often looking for an experience that is not exactly the one the Red Dead Redemption games offer; in each of those moments, that hypothetical player may as well be two different people. I would almost argue that the essential aspects of GTA gameplay have to be somewhat “simplified” compared to those of RDR 2. It’s also important to note that the audience for a GTA game is broader than that of RDR, if for no other reason than that the former is a much more popular brand. Naturally, Rockstar wants players to enjoy their purchase, and I would be slightly disappointed but not surprised if, for lack of a better term, they decided to “dumb down” the core mechanics of their GTA titles compared to those of RDR.

For a specific example of a possible mechanic I am not too excited about, one rumored change in GTA VI is that instead of carrying all weapons at all times, each playable character will have a more limited inventory, with relatively frequent opportunities to make changes to it. Essentially, road vehicles would be what horses were in RDR, storing your other weapons and some inventory items. This would require players to think more actively about which weapons to carry at any given time – making things more complex, which I am not sure is a positive change. But I also see how this would improve realism and allow the “personal vehicle” to have more impact on how the game is played. In prior GTA titles, outside of specific missions, what vehicle you drove in the open world didn’t really matter, so such an inventory system could be a way to make the concept of the personal vehicle more relevant. It’s an increased “level of detail” for sure, but I suspect such mechanics would feel limiting in those moments where one mainly wants to mess around in some power-tripping fever dream.
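To make the rumored trade-off a bit more concrete, here is a minimal sketch of how such a “vehicle as trunk” inventory could be modeled. Everything in it – the categories, slot counts and names – is invented for illustration, and has nothing to do with any leaked or official design:

```typescript
// Hypothetical sketch of a limited on-person inventory backed by vehicle storage,
// loosely mirroring how RDR 2 uses the horse. All identifiers, categories and
// slot counts are invented; this is not based on Rockstar's actual implementation.

type WeaponCategory = "sidearm" | "long gun" | "heavy";

interface Weapon {
  name: string;
  category: WeaponCategory;
}

class CharacterInventory {
  carried: Weapon[] = [];
  // Only a few slots can be carried on the character at once.
  constructor(
    private limits: Record<WeaponCategory, number> = { "sidearm": 1, "long gun": 2, "heavy": 1 }
  ) {}

  canCarry(w: Weapon): boolean {
    const count = this.carried.filter(c => c.category === w.category).length;
    return count < this.limits[w.category];
  }
}

class VehicleTrunk {
  stored: Weapon[] = [];

  // Swapping at the trunk is the "relatively frequent opportunity" the rumor
  // describes: stow one weapon, take out another, within the carry limits.
  swap(inv: CharacterInventory, takeOut: Weapon, putAway?: Weapon): boolean {
    if (!this.stored.includes(takeOut)) return false;
    if (putAway) {
      inv.carried = inv.carried.filter(w => w !== putAway);
      this.stored.push(putAway);
    }
    if (!inv.canCarry(takeOut)) return false; // would exceed the carry limit
    this.stored = this.stored.filter(w => w !== takeOut);
    inv.carried.push(takeOut);
    return true;
  }
}
```

The interesting (and risky) part of such a design is exactly the friction in that swap step: too much of it, and the chaotic messing-around suffers.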

Betrayal was one of the main themes of the RDR 2 story, and it has been understood since the first trailer that the story of GTA VI will revolve around trust. Many people seem convinced that, in the style of what happened in the story of the 2018 release, the two protagonists will betray each other, either of their own volition or because external forces lead them into it. Two of the three possible GTA V endings also consisted of protagonists turning on each other. If that indeed turns out to be the main plot device once again, then the fact that “everyone” has seen it coming is enough to explain why I’d be disappointed. I am hopeful Rockstar will avoid repeating the same note in essentially the same way, and will be able to deliver something more surprising and just as intense.

This cutting room floor can fit/fix so many leaks

With such a humongous budget and prolonged development timeline, it’s safe to assume that GTA VI will also have the greatest amount of what’s colloquially described as “cut content”: early concepts that didn’t pan out, removed gameplay mechanics, abandoned plot arcs in the main story, world locations that never came to be, unused voice lines, sound effects and original soundtracks… there are endless categories of things that could fall to the floor of the cutting room of this type of game. In fact, there’s probably enough space in the development history of VI to fit the production of an entirely different game, and if what has been rumored about “Project Americas” is to be believed, that’s sufficiently close to what happened, at the very least pre-production-wise.

For those unaware of this “Project Americas,” because not too much is known for certain, a quick summary follows: this was, allegedly, an early concept for GTA VI that was in (pre-)production from as early as 2012 until circa 2020. The action was meant to take place, at least in part, throughout the 70s, 80s and 90s, and in multiple cities from both North and South America, with the main theme allegedly being the coke trade. Some aspects of these allegations have been confirmed by reports from respected journalists like Jason Schreier and Stephen Totilo, though those same reports have also refuted some of the associated theories floating around. According to this line of speculation, at some point these initial plans were thoroughly changed, or exchanged for ones with a more modest scope, leading us to the current GTA VI concept that entered full production at some point between 2020 and 2022.

Within GTA datamining circles, some believe that many of the assets made for this earlier game concept ended up being repurposed and adapted for GTA Online’s Cayo Perico, a location which conveniently has a South American theme. It is unknown whether the upcoming GTA game retained the “Project Americas” codename, or whether we are free to use that to exclusively refer to the allegedly canned plans – why do people with access to Rockstar insiders never think of asking these pressing questions?

Personally, I believe that Dan Houser’s departure from Rockstar Games in March 2020 may be connected to this change of plans, but the direction of causality is unclear. Regardless, even though I can’t quite explain why, I find the idea of yet another GTA taking place in the present day more appealing than that of a period piece like the original Vice City. I would like to see Rockstar explore that original Americas concept, but as its own new IP – if they’re ever going to move away from GTA and RDR, that is.

If the development of VI really was kind of rebooted at some point, that certainly doesn’t mean that the previous effort was completely wasted: for example, progress in areas like the core of the game engine and development tooling is always going to be cumulative, and as mentioned, it is believed that some of the world design and asset modelling efforts were repurposed for GTA Online, while others were likely still valid for the current Vice City concept. A well-preserved 80s car is supposed to look and sound the same whether it’s 1985 or 2025, after all.

With Rockstar being busy with the release of GTA V and the development and release of RDR 2, I find it unlikely that “Project Americas” ever went significantly beyond a pre-production phase. I believe that most of the resources needed to enter full production would only have become available after the release of RDR 2. This means that “Project Americas” received full-steam development focus for just one to two years before the supposed change of plans. In decades past, that would have been enough time for Rockstar to make a hit, but in the latter half of the last decade, that was really not the pace and scope they were aiming for.

So why would this intriguing concept not move towards a final product? One hypothesis is that, as more concrete aspects came together – including, perhaps, playable vertical slices – they did not pass the internal vibe checks, prompting a deep rethink of the entire thing. Another possibility is that writers and stakeholders, due to irreconcilable visions regarding which ideas would be most commercially successful, could not agree on a finalized concept for the story and setting – this is the narrative to which Dan Houser’s departure could be connected, though we have no proof of such a connection.

For the hypothetical reboot of the GTA VI concept, the justification I favor the most is really just that of scope management: Rockstar likely realized that, with the great-but-certainly-not-infinite resources and time at their disposal, they could release three or four GTA Vs’ worth of content in that single game – one Los Santos’ worth for each desired combination of city and historical setting – but without being able to meaningfully advance the quality of the storytelling and gameplay within each of those separate settings. Instead, they decided to focus on a single location and time period, as per usual, to deliver something that is simultaneously perceived as undoubtedly “next generation” while also being “safer” from a business perspective – matching what, I wager, are the expectations of most current GTA fans for what a GTA game should be.

I wanted to explore what might have become of the alleged “Project Americas” mainly to drive home the notion that, if rumors are to be even just partially believed, Rockstar is not afraid to give up on concepts that they’re not particularly happy about, perhaps even shelving good parts of multi-year efforts. Therefore, just because something was seen in the infamous leaks of 2022, or even in any subsequent leaks, that doesn’t mean it is confirmed to be in GTA VI. The cutting room floor for GTA VI is definitely more expanded and enhanced than any published GTA V edition ever was.

Lots of time, lots of money and lots of people contributing towards the same project allow for plenty of experimentation and perceived “waste.” Entire concepts, mechanics, features, locations, characters, story lines… they may all come and go, and come back only to be abandoned again. Incompatible options may be worked on in parallel, explicitly to be pitted against each other, with just the best fit surviving. With this in mind, it’s expected that many of those working on the game still don’t actually know what will make the final cut. Besides, the videos and information obtained from the intrusion into Rockstar’s Slack space in 2022 showed lots of content that was already outdated even back then, so imagine what may have happened after three more years, and with one still left to go until release.

Fan initiatives like the GTA VI mapping project have treated much of what these leaks showed as fact – as proof that certain features will be in the final product, or that certain parts of the world will look a certain way. While, map-wise, the trailers and official screenshots have been mostly consistent with the leaks, the relative stability of the already known parts of the map doesn’t tell us anything about the many other aspects of the game. For example, some fans became convinced that the playable characters would be able to go prone, because it was shown in a leaked test video, but it could be something that was never finished, that was later removed for the sake of simplifying player movement options, or that will only be available in very limited scenarios.

Similarly, features that exist in prior Rockstar titles don’t necessarily have to be present in GTA VI, and even when they are seen in those leaked videos, for all we know, they might be visible there only because they were carried over from a previous game. Later, someone may decide to remove them, or maybe they break as development goes on and fixing them isn’t considered a priority. For a random example: bowling did not reappear in GTA V.

In the past, I wondered whether the 2022 leaks would lead to Rockstar changing aspects of the game so that it wouldn’t be as spoiled by the leaks, or to claim victory over the hackers in some way, by purposefully invalidating the extracted information. We now know that the main protagonists haven’t changed, but some of the most recent rumors – which I don’t find particularly convincing – mention that certain side-missions featured in the leaks have been cut. If that’s true, I don’t think the leaks will have been the main motivation. In the particular example mentioned by that gossip, the relevant leaked-and-allegedly-cut dialogue involved Jay Norris (the parody of a Zuckerberg-like personality), so perhaps the true motivation for this particular removal had to do with real-world developments around social networks like Twitter/X and TikTok. Regardless of whether the leaks led to changes in writing, the combination of the ever-expanding cutting room floor with the release of more marketing materials will gradually decrease the relevance of the improperly publicized data. This will, quite literally, fix the leaks.

In addition to people’s ideas changing over time, the people working on the game also come and go. I mentioned Dan Houser’s departure, and there was also Lazlow Jones’s departure and, years earlier, Leslie Benzies’s not-so-peaceful exit (Benzies went on to direct MindsEye, that infamous self-inflicted disaster of a game). These are just the well-known names; plenty of other people, most certainly including some in positions of artistic direction, have come and gone over the twelve years since GTA V’s release and the nearly seven years since RDR 2’s release. As people come and go, ideas gain and lose champions; within Rockstar, the opinion about what a present-day GTA game should be like will keep changing all the way up to the release of VI, even as an increasing number of aspects are gradually finalized.

Between the possible canning of the entire original concept for the game and the more mundane iterations all game aspects go through, I can’t help but wonder if, once the dust settles, we will end up feeling that the breadth and depth of GTA VI don’t represent what was expected of a twelve-year wait for a sequel. Sure, we must keep in mind that RDR 2 was developed and released in the meantime. But even taking that into account, we must acknowledge that the passage of time, by itself, introduces inefficiencies and confounds the development process.

Besides the aforementioned evolution of ideas coming from within Rockstar, the world outside their studios also kept advancing throughout this decade: technology capabilities, players’ expectations, and investors’ expectations are all amplified now. It’s not like Rockstar will take their ideas from the mid-2010s, get thousands of socially distant monks to put ten years of linear effort into their next game, and release a game from 2013 in 2026. They have certainly spent some effort just keeping up with the times, and that may have caused more back-and-forth than anyone will be able to accurately account for.

We can’t do anything about how much and what content gets cut, nor about whether our most desired combination of concepts and features ends up making the release, so I think the next best thing we can hope for is a repeat of what happened with GTA V, where some of the cut content and mechanics were eventually repurposed or reimplemented in later expansions. And if I’m allowed to dream a bit more, then let’s hope they won’t be exclusive to Online.

It won’t be a Grand Kitchen Sink

It’s impossible to throw every possible gameplay mechanic, every single movement feature, every imaginable story arc at the same wall, and have them all stick the landing. I am not sure GTA VI was ever intended to be the “everything game” the way some apps want to be the “everything app;” such a proposition would definitely appeal to shareholders, and may even sound nice to many players, but realistically it’s impossible to define.

What genre would an “everything game” be? If one nevertheless tries to make such a thing, I think the result would be confusing to play, would struggle to tell any story, and would be too overwhelming to even work as a good sandbox/simulation game – more or less as soon as players reached the part where one, after dusting the virtual pantry in a detailed minigame of sorts, has to realistically craft their explosives from household parts… eventually getting interrupted and having to spend real years in fake prison, from which an army of space units can be assembled in order to break us out, but only after we build an efficient factory to turn raw materials into such units (alternatively, one can post bail with real credit cards!). Oh, and there would be chess, and poker, and soccer, and real-money gambling somewhere in that “everything game,” too.

It’s easy to ask for features when we don’t actually have them all implemented to try out simultaneously. But one may argue that some of the mechanics people commonly mention – like limited weapon inventories, realistic vehicle fuel consumption, more complex police behaviors, an economy with more depth to it, more varied and impactful character customization options, and so on – have actually been implemented, and in GTA V no less, by modders, including in roleplay (RP) servers. This is notable, as some of these things sound like they’d be more challenging in multiplayer contexts. If motivated hobbyists can make things happen, why won’t Rockstar?
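As a toy illustration of why hobbyists can get some of these mechanics working at all, here is roughly what a naive fuel-consumption loop of the kind RP scripts bolt onto GTA V could look like. This is a generic, self-contained sketch with made-up names and numbers; it deliberately doesn’t reference any real FiveM or RAGE API:

```typescript
// Toy fuel-consumption tick of the kind modders add to GTA V RP servers.
// All identifiers and constants are invented for illustration purposes.

interface Vehicle {
  engineOn: boolean;
  throttle: number;           // 0..1, how hard the engine is being pushed
  fuelLiters: number;
  tankCapacityLiters: number;
}

const IDLE_LITERS_PER_SECOND = 0.002;
const FULL_THROTTLE_LITERS_PER_SECOND = 0.02;

// Called once per simulation tick, with the elapsed time since the last tick.
function tickFuel(vehicle: Vehicle, dtSeconds: number): void {
  if (!vehicle.engineOn) return;
  const rate =
    IDLE_LITERS_PER_SECOND +
    (FULL_THROTTLE_LITERS_PER_SECOND - IDLE_LITERS_PER_SECOND) * vehicle.throttle;
  vehicle.fuelLiters = Math.max(0, vehicle.fuelLiters - rate * dtSeconds);
  if (vehicle.fuelLiters === 0) vehicle.engineOn = false; // out of gas: stall
}

function refuel(vehicle: Vehicle, liters: number): void {
  vehicle.fuelLiters = Math.min(vehicle.tankCapacityLiters, vehicle.fuelLiters + liters);
}
```

The hard part, of course, isn’t the arithmetic – it’s deciding whether that kind of friction belongs in the base game at all, which is exactly the question the next paragraphs get at.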

It’s true that GTA RP has been very successful, but is it the type of experience that the next game should be designed around? I don’t think so. I don’t believe that such mechanics are sought after by the majority of the GTA audience. I doubt most people look to GTA games as a general-purpose “real life simulator” or even “profession simulator,” which is what, from my point of view, most RP servers try to be (to different degrees of depth or “seriousness”). My belief is that most players want GTA to present a good diorama of the real world with some good satire, some violent action and some appealing human physique mixed in, but they don’t want to simulate anyone’s real life in great detail; that’s not what most people look for in fiction, anyway. The fact that the world and base mechanics of modern GTA games make a pretty good basis for RP games is coincidental.

I think most roleplayers would agree that some of the mechanics found in RP servers would get in the way of effective storytelling, and would generally reduce the entertainment factor if they were a mandatory aspect of a GTA game. Even considering GTA Online exclusively, and for all of Online’s faults, I would miss the non-RP official experience if RP servers were the only multiplayer option available.

I think it is positive to allow “RP-like interactions” to happen whenever they don’t disturb the “regular” gameplay, but I am really not looking forward to a GTA game full of slow, action-blocking animations like the many found in RDR 2. At the same time, I will be slightly sad if certain aspects of real life are not better represented in GTA VI, with the aforementioned fuel consumption being one of them. Balancing all of these systems will be the art of the craft. Maybe that’s what they mean by “polishing,” after all, in which case I can definitely see one year’s worth of work feeling short.

Then there are features which don’t really interfere with any other features, and whose presence will come down to how Rockstar decided to allocate resources – specifically, how much they’ve decided to spend on things that are not essential to the storytelling. One example is sports activities: in theory, the game can contain everything from fighting to basketball, including tennis, football and soccer, golf, heck, even esports could be featured! Would it make sense? Probably not, but would it be impressive from a “look how many things this game has” perspective? Definitely!

The Politics Policy Police

For a game to act as a great diorama of the real world with some satire mixed in, it is inevitable that it will contain some references to the societies we live in. I’d say most story-driven games feature some level of social and political commentary. Sometimes it is more obvious and on-the-nose, sometimes it’s just in the subtext, or left to interpretation. Games by Rockstar have contained satire and commentary of all varieties and subtleties, and it’s impossible for GTA VI to not have some of it too.

The very mention of “Theft” and “Auto” in the title of the series implies that automobiles and crime will be involved – what does that say about our societies!? It definitely indicates that private property, crime, and automobiles exist in our societies, and since it’s a game title, it implies that such aspects are liable to be portrayed in a video game! That is political commentary. Unfortunately, I doubt a game series called Grand Theft Train would have found the same level of success. What does that say about our societies?!! I can’t believe anyone would make such a conservative, consumerist game that simultaneously glorifies car-centric planning and has a biased recommendation of which vehicle types are worth stealing!

We know GTA VI will, once again, take place in a fictional version of the USA, in a present-day setting. Therefore, many current US-centric themes are to be expected. To people in some cultures or with certain political orientations, this focus on the US is, by itself, considered political commentary; what does that say about their societies?! I’m hearing this shtick is overused, so let’s move on. But really, it’ll be impossible for the next GTA not to touch politics, even if inadvertently, and nobody will be happy with how it does so: if the game’s satire is more subtle than in previous titles, some will complain “they’ve gone soft!” If it appears to be too overt in its criticism or very targeted in its mockeries, then, depending on the perceived political leaning of those artistic expressions, some will either complain that “it’s woke!” or that it has caved to corporate/conservative/right-wing interests. GTA VI could be perfectly balanced when it comes to criticism of the entire political spectrum, and many would still only see the parts that don’t align with their vision.

As usual, Rockstar’s mission will have been accomplished if absolutely everyone is outraged, and yet nobody resists playing the game, even if just to see what it’s actually about. I think everyone agrees that this developer likes to push the envelope of what mainstream games can feature, particularly in their GTA titles; causing some controversy, or at least sparking public discussion, seems to have been a secondary goal for every game in the series. Sometimes the controversy doesn’t quite have the expected causes and gets out of control, like what happened in the infamous Hot Coffee case, back in the San Andreas days. Still, a healthy amount of it only helps the marketing efforts.

To tell the game’s main story, GTA V probably didn’t need a mission where players torture a random guy. However, the impact of the political commentary intended there would have been greatly diminished if the mission didn’t play out like that. Some players would definitely have preferred it if such commentary had remained more on the sidelines, where it could be more easily ignored. For many artists, one of the goals of their work is to evoke emotions and thoughts, but if the art can be easily ignored, then it can’t really do that. That mission is like guerrilla street art that’s ugly and unnecessary, and yet impossible to ignore, created specifically to cause the public to react, to “feel something” – as cliché as that sounds. I hope GTA VI tries to make people feel something too, preferably without resorting to shock value.

In ten years, GTA VI might be seen as less of a period piece than IV and V are seen today. While GTA has never been a commentator on the latest news, this next installment might focus even less on current topics and might even end up feeling like a “safer” art piece than previous titles. If that turns out to be the case, I don’t think it will necessarily have been due to Rockstar “going soft,” or because of pressure from investors, or anything like that. I think it’s more that recent years feel very fast-paced, both within US society and worldwide; keeping up with such developments would be challenging in the context of the extended development period. Forgoing that kind of topicality might be the only way to reach a decent end result.

Just five years ago, some people were rightfully worried that our collective stupidity might not let us outlast a virus; would you be satisfied if the representation of such a topic went beyond a secondary mission or two in the world of Vice City? The problem with trying to capture the current world in a GTA game is that it could cause the game to age quite quickly and relatively badly, coming across as tone-deaf and out of touch with present times, rather than as good satire of a certain period of modern history. Certainly, Rockstar isn’t developing VI with the intention of releasing VII just two years later; they want the game to remain relevant and palatable for potentially as long as GTA V has, and in that sense, it would be prudent to focus more on the constants that never change, instead of capturing and mocking this decade’s personalities, recent trends and scandals. And what better place to portray people’s vices than Vice City?

Going back to the alleged “Project Americas” concept, I wonder if the idea to revisit the later decades of the 20th century was motivated by the difficulty of dealing with the uncertainty of the present time in the face of a lengthy production process. This excerpt from an interview Dan Houser gave to GQ Magazine two days prior to RDR 2’s release provides a lot of insight into how they saw the world back then:

Dan Houser is “thankful” he’s not releasing Grand Theft Auto 6 in the age of Trump. “It’s really unclear what we would even do with it, let alone how upset people would get with whatever we did,” says the co-founder of Rockstar Games. “Both intense liberal progression and intense conservatism are both very militant, and very angry. It is scary but it’s also strange, and yet both of them seem occasionally to veer towards the absurd. It’s hard to satirise for those reasons. Some of the stuff you see is straightforwardly beyond satire. It would be out of date within two minutes, everything is changing so fast.”

Source: GQ Magazine, October 24, 2018

The “age of Trump” never really went away, and Dan Houser left Rockstar before Trump’s first term was even over. Regardless of whether, at the time, Rockstar were pre-producing the alleged “Project Americas” concept or something else, Dan Houser’s comment remains painfully relevant today: reality is imitating satire to the point where it’s sometimes a carbon copy, and jokes can become unfunny within hours.

Dan Houser was clearly aware that any GTA released then – as now – would be seen in a political light by the public, and seemed a bit unwilling to handle that in the political climate of 2018. Both within the US and globally, that political climate wasn’t even as hot as the current one! I imagine Rockstar’s appetite for dealing with that particular type of mixed reception hasn’t increased dramatically since then, and that’s why I bet that most satire and mockery in GTA VI will try to avoid overtly pinning its criticism on either side of the political spectrum.

In the end, it is impossible to please everyone when it comes to making a representation of our current world that’s meant to be simultaneously realistic and satirical. The more realistic the “diorama” looks, the harder the satire hits and the more outrage it will cause. To be a commercial success of the expected size, the next GTA needs to appeal to as many people as possible; if it ends up being more muted in the political commentary, in my opinion, that won’t necessarily be a bad thing. Many use games to take a break from the more complex aspects of the real world, and if politics aren’t as obviously present in the game, maybe it will manage to be a better safe harbor for those players.

How to sell Moore copies

We have gone through a total of five releases of GTA V: first for PS3 and Xbox 360 in 2013, followed by a PS4 and Xbox One release in 2014, the first PC release in 2015, the PS5 and Xbox Series release in 2022, and the PC Enhanced version in 2025. Unfortunately for consumers, only the last of these was free for owners of the previous PC release. These many re-releases certainly help explain how the game sold so many copies.

This staggered approach to multi-platform launches, particularly the PC release, has been a staple of the GTA series and Rockstar titles in general since essentially their first games. It has been rightfully criticized by many, who claim it is mostly used to encourage players to buy the same game more than once. To be fair, players may want to do so for reasons other than mere convenience, as the later re-releases tend to contain various improvements and additional features over the initial release. Not to mention, the PC version has traditionally enabled game modding – something which I hope will remain practically possible in VI. Unofficial, unvetted modding will always be superior to whatever moderated content creation tool Rockstar might allegedly be building into GTA VI.

A decade later, it’s sufficiently evident that GTA V was held back in multiple aspects by having initially released on the seventh console generation – that of the PS3 and 360. One of the few controversies surrounding the GTA V trailers concerned the large amount of trees and other vegetation showcased, which was pared back before release – perhaps to get the final product running decently. Players have also criticized the simplified physics and the not-as-destructible world elements, compared to those of the predecessor – compromises probably made to free up enough compute performance to realize new mechanics and the otherwise more detailed world. RDR 2 shows some of the aspects in which GTA V would likely have been grander, had it released exclusively on the eighth generation.

Well over a decade has passed since Moore’s law started being declared dead, and we are two console generations past the one where GTA V debuted. The increased costs of PC hardware, particularly GPUs, reflect both the increased cost of manufacturing chips on bleeding-edge processes and the increased demand for GPUs outside of the gaming segment. It is highly likely that the next console generation is going to bring less of a performance leap than prior generations, or that the current one is going to stick around for longer, or even that the next generation won’t be nearly as affordable – might the Nintendo Switch 2 pricing be an early taste of this?

My prediction is that GTA VI will age even better from a technical standpoint than V did, barring unforeseeable improvements in hardware capabilities over the next decade or so. In the graphics department, the current console generation supports just enough raytracing acceleration to justify the use of a rendering pipeline that takes advantage of it, and the hardware architectures and APIs of current consoles are very similar to what is presently available on PC. One of the main areas where GTA re-releases have presented improvements over the earlier ones is graphical fidelity, and when it comes to this aspect, I imagine that between the console release and the PC release, Rockstar won’t have to make many adjustments besides those mandated by quality control and those needed to offer additional graphical options.

As soon as a PC release is out, I believe that players who have the means to play that edition properly tend to prefer it over the console releases. (The only reason why this wasn’t always the case with GTA V had to do with more cheating shenanigans in Online on PC compared to consoles, but I am making the assumption that Rockstar will have that sufficiently tackled in VI.) The thing about the PC platform is that it doesn’t typically make consumers buy software again in order to take advantage of hardware improvements, and therefore Rockstar’s ability to indirectly use Moore’s law to sell people new copies of the same game might be diminished if the market share of the PC platform keeps increasing – either through conventional desktop and laptop PCs, or through the new type of handheld PCs pioneered by the Steam Deck. Fortunately for Rockstar, with GPU prices being the way they are, it isn’t certain that PC will keep growing, but then again – it’s also uncertain whether consoles will remain affordable.

The rumor mills of the Xbox variety suggest that Microsoft may be preparing to make Xboxes more like PCs, and in the limit I can see them becoming just Windows PCs running under a dedicated mode (possibly like the “S mode” in Windows 10 or 11), turning the Xbox brand from bespoke hardware consoles into more of a badge certain PCs can wear – much like Valve’s old concept of Steam Machines, except powered by Windows. In such a scenario, Rockstar Games would also not need to re-release updated versions of titles to take advantage of new Xbox hardware, because Xbox generations as we know them today would likely cease to exist. Rather, the certification baseline (that is, the “minimum requirements”) for hardware to wear that “Xbox badge” would just keep increasing. While this is relevant as far as long-term speculation goes, I wouldn’t expect such moves from Microsoft to have a meaningful impact on the launch strategy for GTA VI, certainly not for the first couple of years of its life.

Regardless of the foreseeable hardware improvements, I believe it is safe to assume that a large part of the GTA VI budget must have been put into future-proofing the technical core of the game and also its user interface, such that both can better stand the test of time. It’s unlikely that the original GTA Online was planned to be maintained for over a decade, and Rockstar had to improvise some things as they went, taking care not to break the story mode and functionality like the Rockstar Editor in the process (with mixed results, I must add). There’s also a large collection of inconsistencies and small bugs which, probably, only came to be due to architectural inefficiencies.

I wonder about all the features that might have been brainstormed for GTA Online at one point and were never actually pursued, due to the game not really being prepared for them, especially on the earlier, more limited console hardware. Tenuous rumors, originating from patents filed by Rockstar, point towards the possibility of the GTA VI map receiving significant expansions, and of the virtual world generally being more prepared to change over time, possibly beyond what the engines of their previous games were really designed to handle.

My personal wish is that, in addition to new and refreshed locations, Rockstar would have a vision for GTA VI updates that allows introducing new gameplay mechanics without them feeling tacked on. With GTA Online, we saw an interaction menu that kept increasing in complexity indefinitely, until it looked more like a debug tool than the primary way for players to interact with so many new features. There was also the problem, since essentially day one, that content and options were spread throughout three menus (the main one, the interaction one, and the phone), hurting discoverability. At one point, Rockstar began to remove older, unpopular content in an effort to make mission selection menus more friendly – or at least that was their justification. These problems with feature interaction and content discovery are the sort of thing I hope they will definitely fix in VI.

It would also be undoubtedly cool if such updates were made available to story mode, rather than keeping the single-player content relatively frozen in time, like what happened in GTA V. For a baffling example, Rockstar added more radio stations to Online over time, and even though the first few of these additions were available to story mode, the more recently added ones aren’t. However, realistically, I only see that type of evolution happening if the single-player and multi-player modes are more intertwined in GTA VI than they were in V, and that has other implications I am not too optimistic about.

Check expectations/speculations

Why, oh why, am I doing this? I didn’t want to write a post with my wishlist for GTA VI. I really wanted to focus more on a meta-commentary about all the speculation going around. There’s still a lot to discover, or at least confirm, about their next release. I have the impression that people often tend to over-analyze the materials that have been put out – both the official and the leaked ones – and seem to forget about all the things these materials don’t show. Especially if we ignore the 2022 leaks, which are definitely very out of date by now, we know next to nothing about most of the aspects that make up an open-world action-adventure game, including:

  • The story: we barely know anything besides the names, basic descriptions and general motivations of some of the characters, but we have no idea whether that list of key characters is complete, nor how much of a “twist” there will be in said descriptions and motivations.
  • How the storytelling will take place, from a logistical and organizational standpoint: will there be clearly defined chapters? Will we be somewhat limited in what we can do in each chapter (like in RDR 2)? Will it be a mostly linear story – as per tradition – or will it actually have more player choices with impactful consequences? Will there be meaningful secondary, perhaps optional, story arcs?
  • How it will actually feel to play the game: will we feel enough freedom to ignore the story aspects and just “mess around” when we so wish? Will the game be as good of a sandbox as its predecessors? Will the missions continue to be mostly linear and relatively full of failure conditions as soon as you try to approach them in a creative way? If GTA VI is to have a more dramatic tone for its main story, will it still feel fine to cause a huge ruckus, including police shootouts? Will we continue to be able to save almost anywhere outside of missions, and spawn in the same place when reloading saves?
  • What story mode activities will exist outside of the main story: besides secondary missions, will the “random” world events continue to be mostly scripted mini-missions taking place in predefined locations? What mini-games and sports activities will be at our disposal? Will we still have prop collectathons like the ones that plagued GTA V and Online?
  • What Online will be like: a giant topic that we know nothing about. From what I’ve seen, most of the speculation centers around the acquisition of FiveM and one or two patents about session management methodology (patents which, for all intents and purposes concerning any past or future inventions of mine or of my employers, I know nothing about). This acquisition has sometimes been used to argue that Rockstar will focus more on the RP style of gameplay, particularly for Online.
    • Regarding FiveM matters, I recommend that everyone interested in this topic read the long collection of information over at fivem.team. It’s an even longer write-up than this one, and some of it is speculative and likely biased or one-sided, but it shows receipts for many of its reveals, and it provides a unique glimpse into the somewhat secretive team that was cfx.re/FiveM prior to the acquisition, as well as into the small part of Rockstar involved in those matters. It is the reminder, otherwise lacking in this essay, that not everything Rockstar touches is gold – in fact, it’s sometimes the contrary – and that people, even inside Rockstar, have gotten hurt.
      If you, like me, thought that the elusive personality known as “NTA”/”NT Authority” was a bit… controversial, but didn’t know much beyond that, you are in for a treat that will change the way you perceive not just NTA, but also other personalities involved in different multiplayer game modding projects over the years.
  • Many of the more technical aspects beyond “graphics” and “attention to detail”: what will different weather conditions look like? What will the vehicle destruction model be like? How will NPCs react to our actions? There’s lots of speculation going around, and very little official information.

I have personal preferences and wishes regarding many of these topics, but I don’t think they quite reach the point of being expectations. Particularly when it comes to the official marketing assets released so far, people tend to read between the lines, extrapolate a bit more, and then set expectations that may not be realized. Motivating these thoughts and discussions is pretty much the point of releasing such materials, but when it comes to a highly anticipated title like this one, the speculation reaches levels that, in my opinion, probably go a bit beyond what’s desired by the game publisher.

In this age of review bombing, “influencers,” instant communication, and decent refund systems (in decent stores/jurisdictions), first perceptions matter more than any retrospective; no reputable publisher would want the first reaction from players to be one of disappointment, especially when the game in question is all everyone in this space will be talking about during its release month, which would make any disappointing aspects be endlessly parroted. With the massive anticipation for GTA VI and the budget behind it, a final product that clearly doesn’t match what is in the trailers, or which is otherwise troubled – like Cyberpunk 2077 was at launch – would not just cause enough damage to the GTA and Rockstar brands to trigger a sell-off of TTWO shares; it could very well cause another video games industry crash.

I’ve been mostly dismissing the opinions of those who seem convinced that the GTA VI trailers are all a great con; that Rockstar is attempting to replicate Ubisoft’s “success” with the original Watch Dogs, whose trailers greatly misrepresented different aspects of what the final product would be like on contemporary hardware, particularly when it came to graphics. I can’t come up with a good reason why Rockstar would consciously opt for this strategy: any benefits of doing so (like increased day-one sales) look like they’d be completely undone by the aforementioned reputational hit (which could have impacts over the intended multi-year lifespan of the game, particularly the multiplayer portion). But then again, with the number of games that seem to release nowadays in a poor state of quality control and performance, I definitely see where some of the worries come from.

Rockstar has historically produced final products that exceed the earlier marketing previews in most aspects, products which tend to be extremely competent in both the technical and artistic departments. However, the great debacle of the “Trilogy Remaster” – about which I could easily write an entire essay of its own – has understandably left a sour taste in many people’s mouths, including mine. I am absolutely convinced that the technical story for GTA VI will have nothing to do with that, and will be in line with Rockstar’s usual form, the one that squeezed GTA V into the PS3 without making the type of haphazard compromises that Ubisoft had to make to squeeze the first Watch Dogs into the same console. Briefly, the reasons: there are orders of magnitude more technical effort behind GTA VI than behind the infamous remasters; the remasters were not a flagship product the way an original new game is (they always felt to me like more of a cash grab attempt); the remasters were handled by a quasi-external studio, unlike VI, which is the current priority of essentially all of Rockstar’s studios working in tandem; the remasters had to preserve much of the behavior of two-decade-old code and integrate it into a modern engine, while VI will be no such Frankenstein’s monster. And finally: I highly doubt they’d run the risk of fumbling two major releases in a row.

So I do indeed have some baseline expectations for what GTA VI will be like – as the title of this essay indicates, I expect it to be a masterpiece. I just try not to have too many expectations about the specifics of what that will entail. Having no expectations at all would be even better, and easier, than keeping them all in check – a wild thing for someone who’s mainly writing speculation to say, I know. Trying to maintain this sense of detachment definitely makes the wait for the final product more boring, and can easily come across as pessimism. At the same time, I don’t want to completely ignore everything leading up to the moment I play the game, especially when that would require that I stop following the news and discussions about topics that interest me a lot, for years!

My first contact with GTA V, the first GTA I played by myself, took place well after its launch. Prior to playing it, I hadn’t consumed any of its promotional materials, read any of the anticipatory discussions, or even any post-release reviews. This was because I only really started playing such major game releases when I finally put together a sufficiently powerful PC in the latter half of the last decade; prior to that, I really didn’t care that much about games. I was obviously aware that GTA V was a major best-selling release – which is why it was one of the first games I played on that PC. Still, I had no expectations besides a vague idea of what GTA gameplay looked like, mostly from seeing colleagues play very butchered versions of GTA: SA on school computers, nearly a decade earlier.

In retrospect, and having now looked at its promotional materials and knowing more about all the pre-launch anticipation and speculation, I think that being able to dive into GTA V from a point of nearly zero knowledge was a better experience than if I had eagerly awaited the game for years. I can’t quite explain why I feel this, but I wonder if this is why, beyond pure nostalgia, many people prefer the first GTA they played? Or even the first open-world action-adventure game in the same genre as GTA? Because they went into it with fewer expectations and “dreams,” not even those originating from playing a different game in the same genre, and therefore had more jaw-dropping, mind-opening or just plain fun moments than they would otherwise?

I suppose I am trying to balance the way I experienced GTA V, and later other games by Rockstar, with the natural excitement stemming from wanting more of what I liked about these games, and with my interest in the extremely long development cycle of VI. I know I will be diving into it with more expectations and more of a wishlist than I ever had for any game in its genre, but I also know I will be nearly a decade older than I was during my first V playthrough… I have different opinions now, and much more experience with the games medium, so the circumstances would be different anyway. And following the trailers, and a bit of the speculation, does indeed make the wait less boring, even if it may ultimately make the actual final product more disappointing.

Shielding ourselves from marketing materials, fan theories, and general news about an upcoming major release is difficult once we have an interest in that particular type of game, and we are bound to create expectations, so the next best option is to keep them in check, preparing ourselves for small disappointments in certain aspects. I think that some people are unable to do this in their thoughts alone, and as a way to cool themselves down, they overcompensate in the “pessimist” direction and end up claiming that the trailers are “faked” in some way, for instance. Clearly, I too was unable to quietly calm my expectations, as I couldn’t forgo writing this essay, indulging in speculation as I went. I’m convinced this is, above all else, a coping mechanism.

The largest ever (s)cope

Regardless of how impatient we are for the release of a sequel to a favorite piece of media, dreaming about what we would(n’t) like it to be helps us understand and refine our personal taste, guiding our exploration of the medium. This is also true for games, and for the GTA franchise in particular, it feels necessary. With Rockstar taking more and more time between game releases, even though such releases have enormous amounts of content to explore, people who enjoy open-world action-adventure games eventually feel compelled to explore other games, from other developers.

Exposure to the original ideas present in other games can reflect back on the way we perceive our favorites. It adds more points to the fuzzy cloud of things (mechanics, plot points, even more minute things like soundtrack ideas!) that we consider for inclusion in the imagined sequels to our favorites. It can also expose some flaws and clichés of our favorites that we would otherwise not notice – but that only makes us yearn more for a fresh sequel that will hopefully improve on those aspects!

Besides being harmless pastimes, speculating and developing theories about secretive upcoming titles can be an avenue for us to rationalize our wishes for them, to try to make our wishes fit in the reality of what is progressively divulged about them. I imagine these activities are most often associated with happy emotions, but they can also be ways to deal with a subconscious fear that the end product may not completely be to our liking.

I think it’s natural to wish for a sequel that matches our preferences even better than the original did, such that the sequel becomes our new favorite, or at least, one of our favorites. And when it comes to Rockstar games, if GTA VI in particular isn’t sufficiently to my liking, then I already know I probably won’t be seeing a new GTA for at least a decade… and with the way the genre is going, it’s unlikely any other developer will create a game with an equally impressive scope (CDPR with Cyberpunk 2, maybe?). So I definitely worry that GTA VI will miss the mark from the point of view of my personal, subjective, specific taste. Actually, I often paint the worst pictures in my mind (PC release not coming before 2030! No mods! Mandatory anticheat even for story mode! Shark cards in story mode!), but that’s definitely more of a “me” thing.

As someone who is quite interested in how these large projects come to be, my GTA VI daydreaming is often not so much about what the end product will be like, but about the processes that led to it becoming what it will be. This is definitely not an uncommon thing – there are plenty of communities centered around datamining specific games, scouring for evidence of cut content, beta builds, early concepts, etc., all in hopes of understanding how these products become the way they do, perhaps understanding why some of the things we most awaited in a game didn’t end up making the cut. Post-release, we seek this understanding not to rationalize our anxiety about what an upcoming game will be like, but more to perhaps calm our subconscious disappointment over why it didn’t completely turn out the way we envisioned.

In this sense, I look forward to the small disappointments of GTA VI as much as I look forward to its jaw-dropping moments, and I’m definitely eager to learn more about its development history. We still don’t have a super clear picture of GTA V’s production process, after over a decade and multiple significant leaks, including that of the full source code. It is therefore probable that VI’s history will remain elusive for just as long. I’m sure that, while still in development, GTA VI already has a more interesting development timeline than most games made before: bigger budget, bigger wait, bigger scope, bigger drama – and more coping from fans than ever before, as we wait for, hopefully, the 26th of May 2026. Still, Rockstar Games could have it worse: they could have the mission of making a Minecraft sequel.

I remade the GTA VI trailer in Watch Dogs

I did not want to close the year without adding something to this website, and in keeping with the theme of the blog post series I might never finish, here is another post about Watch Dogs… but this one is more of an audiovisual experience.

That’s right: for comedic purposes, I used Watch Dogs to make a high-effort recreation of the first trailer of GTA VI. Despite GTA VI still being just a trailer, a bunch of rumors, and a vaster-than-usual collection of leaks, that trailer may have hyped and warmed some people’s hearts more than whatever won “game of the year” did – and we are talking about a year in which a bunch of very good games released. You can tell that piece of media from Rockstar Games tickled something in me too, or I wouldn’t have spent probably over fifty hours carefully recreating it using a game from a different series.

Soon after the trailer dropped, I got the feeling that I wanted to parody it in some way. The avalanche of trailer reaction content that came immediately after its release – including from respectable channels like Digital Foundry, who probably spent as much time analyzing the trailer from a technical perspective as they spend looking into some actually released games – had me entertain the idea of making a “Which GTA VI trailer analysis is right for you?” sort of meta-analysis meme video. But I realized that making it properly would require actually watching a large portion of that reaction content, and I was definitely not feeling like it. It would also require making a lot of quips about channels and content creators I am not familiar with. Overall, I don’t think it would have been a good use of anyone’s time: I wouldn’t have had as much fun making it, and it wouldn’t be that fun to watch.

The idea of recreating the trailer in other games is hardly original: I have heard about at least two recreations of the trailer in GTA V, there’s at least one in GTA San Andreas as well, and I hope someone has made one in Vice City because it just makes sense. In the same vein as mine, there are also recreations in different game series, including Red Dead Redemption and Saints Row. As far as I know, mine is the first one made in Watch Dogs.

I did not intentionally mean anything by using a game whose reception was controversial because of trailers/vertical slice demos that hyped people up for something that, according to many, was not really delivered in the final game (hence the nod to E3 2013 at the start – RIP E3, by the way). Nor is the idea here to say that Watch Dogs, a 2014 game, looks as good as what’s pictured in the trailer for GTA VI, a game set to release eleven years later. Largely, I chose this game because, for one, I like Watch Dogs even if I am not the most die-hard fan you’ll find; because it is the only game other than GTA V where I have some modding experience; and because nobody had done it using Watch Dogs.

This was anything but easy to pull off: the game doesn’t even have a conventional photo mode, let alone anything like the Rockstar Editor or Director Mode in GTA V. There aren’t many mods for the games in the Watch Dogs series, especially not the two most recent ones, and the majority of these mods aren’t focused on helping people make machinima. One big exception is the camera tools that I am using, and even those were primarily built for taking screenshots – keep in mind I had to ask the author for a yet-to-be-released version that supported automatic camera interpolation between two points.
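
For the curious, here is a minimal sketch, in Python rather than the Lua that actually gets injected into the game, of what that kind of two-point camera interpolation boils down to: blending position, rotation and field of view between two keyframes with some easing. Everything in it (the keyframe values, the function names) is made up for illustration; it is not the tool’s actual code.

# Minimal illustrative sketch (not the actual tool's code): blending a camera
# between two keyframes. A real tool would push the interpolated values to the
# game engine every frame through the injected scripting layer.

def lerp(a, b, t):
    return a + (b - a) * t

def ease_in_out(t):
    # smoothstep easing: the camera speeds up at the start and slows down at the end
    return t * t * (3.0 - 2.0 * t)

def interpolate_camera(start, end, t):
    t = ease_in_out(min(max(t, 0.0), 1.0))
    return {
        "pos": [lerp(a, b, t) for a, b in zip(start["pos"], end["pos"])],
        "rot": [lerp(a, b, t) for a, b in zip(start["rot"], end["rot"])],
        "fov": lerp(start["fov"], end["fov"], t),
    }

# Example: a two-second pan sampled at 60 fps (120 frames)
start = {"pos": (0.0, 0.0, 10.0), "rot": (0.0, 0.0, 0.0), "fov": 60.0}
end = {"pos": (5.0, 2.0, 12.0), "rot": (0.0, 45.0, 0.0), "fov": 50.0}
frames = [interpolate_camera(start, end, i / 119) for i in range(120)]

(Naively interpolating Euler angles like this breaks down once an angle wraps past 360 degrees; a proper tool has to handle that, but it’s beside the point here.)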

I started by recreating just a few shots from the beginning of the trailer. I liked those brief seconds of video so much, and they sparked enough interest in the modding community, that I slowly went through recreating the rest. This required bringing more mods into the equation – including a WIP in-game world editor that was released with light protection measures (probably to avoid people bringing it into online modes or adding it to shitty mod merge packs?) which I had to strip, so I could make it play along with the rest of the tools I was using, including some bespoke ones.

Lots of Lua code was injected into the game in the making of this video, and as I said, this is more for comedy and that sense of pride and accomplishment than any sort of game/mod showcase… but I’m happy to report that, besides some minor retiming, color grading, and artificial camera shake and pan effects, all shots were achieved in-engine with minor visual effects, other than the two shots involving multiple bikes on screen, which required more trickery.

Then there was the careful recreation of every 2D element in Rockstar’s video, including avatars, icons, text placement, and an hours-long search for fonts whose results I am still not 100% happy with. One of the fonts Rockstar used is definitely Arial, but with a custom lowercase Y… I no longer have those notes, but at one point I could even tell you which font foundry was most likely to have supplied the one in question. And did I mention how I also recreated the song cut Rockstar used, so I wouldn’t have to rely on AI separation with all its artifacts?

I think it was while working on the “mud club” shot that I realized I just wouldn’t be able to recreate everything as precisely as I would like. One idea that crossed my mind was to use the infamous spider tank in place of the monster truck in that shot, but I just couldn’t find an easy way to have the spider tank there with the proper look, while still being able to control my mods. Sure, there were multiple technical solutions for it, but that would have meant spending days or weeks just on those two or so seconds of video. I also wouldn’t have been able to find matching animations for the characters. So I decided to take some shots in a different direction that alludes to the setting of Watch Dogs.

Eventually, I let that creative freedom permeate other points of the video. For example, the original “High Rollerz Lifestyle” shot would have been somewhat easy to recreate (the animations for the main character in it notwithstanding) but I felt I had already proven I could recreate easy shots, so I decided to have some fun with it and instead we ended up with “High Hackerz.” Similarly, the final shot features three protagonists instead of two, because I couldn’t decide which one was the most relevant “second character” in the world of Watch Dogs.

The end result seems to have been received with great acclaim, judging by all the public and private praise I’ve been receiving. There are people asking me to continue making this sort of thing, too, which I am not sure is something I want to pursue, especially not on a regular basis – I think a large portion of the fun I had making this was precisely because it had a sufficiently closed scope and was sufficiently distinct from what I usually do, and I suspect I would have a worse time making more open-scoped machinima, particularly in this game where the tooling is only “limited but functional.”

There are also people asking for this sort of thing done in Watch Dogs 2 rather than in the first game – but there are even fewer mods for that game, and I have even less knowledge of its internals. Judging by the title of Rockstar’s trailer, it’s likely there will be at least a second trailer, so maybe I can combine the wishes of both sets of people by then. It’s probably not something I’ll feel the drive to do, though – it will also depend on how busy I am with life by the time that second trailer releases.

As I was taking care of the last shots and editing tweaks, I was definitely feeling a bit tired of this project, and subconsciously I probably started taking some shortcuts. Looking back on the published result, there are definitely aspects I wish I had spent some more time on. There is an entire monologue section from the trailer missing from my recreation, which I can pass off as an artistic decision, but the truth is that I only realized I hadn’t recreated it, or found a replacement for it, after the video was up on YouTube. Similarly, for the effort this took, I wish I had captured the game at a resolution higher than 1080p (my monitor’s vertical resolution), because after going through editing (having to apply cropping, zooming, etc.) the quality of the video really suffers in some aspects. But the relevance of this meme was dropping by the day, and if I had spent much more time on it, not only would I have been sick and tired of the entire thing, the internet would also have moved on. It is what it is, and once again similarities are found between art and engineering: compromises had to be made.

One thing is for sure, the next video I publish on my YouTube channel is unlikely to live up to these newfound expectations, and I like to think that I have learned enough to deal with that. Meanwhile, and on the opposite note, I hope that 2024 lives up to all of your expectations. Have a great new year!

Musings about Watch Dogs

Before I start, a word about this website. It has mostly sat abandoned, as having a full-time software development job doesn’t leave me with the comparatively endless amounts of free time and mental bandwidth I once had. What remains, in terms of screen time, is usually spent working on other projects or doing things unrelated to software development that require lower amounts (or different kinds) of mental activity, like playing ViDYAgAMeS, arguing with people on Discord, or mindlessly scrolling through a fine selection of subreddits and Hacker News. While I quite enjoy writing, it’s frequently hard to find something to write about, and while I have written more technical posts in the past – this one about MySQL being the most recent example – these often feel too close to my day job. So, for something completely different, here’s some venting about a video game series – this was in the works for over a year, and is my longest post yet. Maybe this time I’ll actually manage to start and finish a series of blog posts.

Introduction

Watch Dogs is an action-adventure video game series developed and published by Ubisoft, and it is not their attempt at an Animal Crossing competitor, despite what the name might suggest. The action takes place in open worlds that are renditions of real-life regions; at the time of writing, there are three games in the series: Watch Dogs (WD1), released in 2014 and set in a near-future reimagination of Chicago; Watch Dogs 2 (WD2), a 2016 game set in a similar “present-day, but next year” rendition of the San Francisco Bay Area; and Watch Dogs: Legion (WDL), a 2020 game set in a… uh… Brexit-but-it’s-become-even-worse version of London. The main shtick of these games, in comparison with others in the same genre, is their heavy focus on “hacking,” or perhaps put more adequately, “an oversimplification, for gameplay and storytelling purposes, of the delicate new information security and societal challenges present in our internet-connected world.”

WD1’s launch menu background is a long video that emulates glitch art (also known as datamoshing) and features key story characters and locations.

The games fall squarely into two categories: “yet another Ubisoft open world game” and what some people call “GTA Clones.” It’s hard to argue against either categorization, but the second one, in itself, has some problems. The three Watch Dogs games came out after the initial release of the latest entry in the Grand Theft Auto series (GTA V in 2013), and GTA VI is yet to be officially announced, so snarky people like me could even say that, if anything, Watch Dogs is a continuation, not a clone, of GTA!

More seriously, there are people on the internet who will happily spend some time telling you how “GTA clone” is a terrible designation that is actually hurting open world games in general, by discouraging developers from making more open world games with a modern setting – and I generally agree with them. But I prefer to attack this “GTA clone” designation in a different way, the childish one, where you point the finger back at the accuser and yell “you too!”: GTA Online has, in multiple of its updates, also “cloned” some of the gameplay elements most recently seen in Watch Dogs, and GTA in general has also taken inspiration from different open world games that were released over the years.

“Player Scanner”, a GTA Online novelty. Image source (because I’m too lazy to find a GTA Online session with cooperating players)

In a 2018 update, Rockstar brought a “Player Scanner” to GTA Online, which is reminiscent of the “Profiler” in Watch Dogs games, and in the same update, they also introduced weaponized drones that can be compared to the drone in WD2. More recently, GTA Online received a new radio station whose tracks are obtained from collectibles spread around the world – similar to how the media player track list can be expanded in WD1. I doubt that Watch Dogs was the primary motivation or inspiration for these mechanics, and they were hardly exclusive to Watch Dogs, but the point is that the “cloning” argument can go both ways.

Nowadays, when it comes to open world games, there’s hardly anyone “cloning” a particular game series. Watch Dogs games are GTA competitors, but the same can be said about countless other games, including many that don’t even make use of open world mechanics. None of this negates the fact that, despite not being a “GTA clone,” Watch Dogs ticks all the boxes of said unfortunately named category, for which a better name would totally be “open world games set in a place recognizable as the world we presently live in.” And therefore I won’t hide the fact that many of the comparisons I’ll make will be directly against the two “HD Universe” GTA titles, IV and V, as these are definitely the most well-known and successful games in said category.

I have played through all three games in the Watch Dogs series, on PC. I’m certain I spent more time than the average player in the first two games, having played through both twice, going for the completionist approach the first time I played both of them, and having spent more time than I’d like to admit in the multiplayer modes of WD1 and WD2. By “completionist approach,” I mean getting the progression meter to 100% in the first game, and going for all the collectibles spread around the map in WD2, in addition to completing all the missions. Why? Because, in general, I found their gameplay and virtual worlds enjoyable, regardless of their story or general “theme.”

While players and Ubisoft marketing tend to overly focus on the “hacking” aspect of the series, in my opinion its most distinctive aspect, compared to other open world games, is the fact that more than being a shooter, these can be open world puzzle games, requiring some thought when approaching missions, especially when opting for a stealthier approach. Mainly in the most recent two games, and to some degree in the first one too, there are typically multiple approaches to completing missions, catering to wildly different play styles. This extends even to their multiplayer aspects and adds to the replayability of the games. For example, I went for a mainly “guns blazing” approach on my first WD2 playthrough and settled with a “pacifist” approach when I revisited WD2 for a second time – which, in my opinion, is the superior way to get through the game’s story. But let’s not get ahead of ourselves.

Initially, I was going to write a single post with my thoughts about the three games. As I was writing some notes on what I wanted to say, I realized that a single post would be insufficient – even the individual posts per game are going to be exhaustively long. So I decided to write separate posts, in the order the games have been released, which is also the order I have played them. This post will be about the first Watch Dogs, and the next one will be about its sole major DLC, called Bad Blood.

My notes file for the whole series has over 200 bullet points, so hold on to your seats. Before we continue onto WD1, I just want to mention one more thing: I’m going to assume you have some passing familiarity with the three games, even if you have not played them yourself. I won’t be doing much more series exposition; I mostly want to vent about it, not write a recap. Still, I’ll try to give a bit of a presentation on each thing I’ll talk about, so that those who have played the games before can have a bit of a recap, and so that those who haven’t – but for some reason are still reading this – aren’t left completely in the dark.

Onto what is probably the lengthiest ever rant/analysis/retrospective of WD1. Enjoy!


The Appeal To Celebrity Fallacy

“An appeal to celebrity is a fallacy that occurs when a source is claimed to be authoritative because of their popularity” [RationalWiki]

Today I was greeted by this Discord ping:

What I want to talk about is only very tangentially related to what you see above, and is the result of some shower thoughts I had after reading that. I did not watch the video, and I do not intend to, just like I haven’t watched most of DarkViperAU’s “speedrunner rambles” or most of his other opinion/reaction videos about a multitude of subjects and personalities. My following of these YouTube drama episodes hasn’t gone much beyond reading the titles of DVAU’s videos as they come up in my YouTube subscriptions feed. What I want to talk about is precisely why I don’t watch those videos and why I think that many talented “internet celebrities” or “content creators” would be better off not making them, and/or why the fans who admire them for their work alone would be better off ignoring that type of content.

OK, I was planning on writing a much longer post but I realized that my arguments would end up being read as “reaction videos and YouTube drama are bad and you’re a bad person if you like them”, which is really not the argument that I want to make here. Instead, let me cut straight to the chase:

Just because you admire someone’s work very much,
that doesn’t mean that you must admire its creators just as much,
nor that you should agree with everything they say
(nor that everything they say and do is right),
and the high quality of some of their work does not necessarily make them quality people, nor make all of their work high-quality.

This is one of those things that is really obvious in hindsight. Yet I often find it hard to detach works from their creator, and I believe this is the case for a majority of people, otherwise the “appeal to celebrity” fallacy would not be so common, and there wouldn’t be so many people interested in knowing what different celebrities have to say in areas that have nothing to do with what made them popular and successful in the first place.

This is not a “reaction/opinion pieces are bad” argument. If someone’s most successful endeavor is precisely to be an opinion maker, then I don’t see why they shouldn’t be cherished for that, and their work celebrated for its quality. But should you not like their work, you’re still allowed to like them as a person, and vice-versa.

DarkViperAU is an example of a “newfound internet celebrity” whom I admire for much of their work, but who is progressively also veering off into a different type of content/work (of the “opinion making” kind) which, if I were to pay attention to it, could greatly reduce my enjoyment of the parts of his content that I find great. For me, the subject of today’s ping on his Discord was a great reminder of that, and sent me off on a bit of a shower-thought journey.

While I am not fond of end-of-year retrospectives – calendar conventions do not necessarily align with personal milestones – 2020 was definitely the most awkward year in recent times for a majority of the world population. It was an especially awkward year for me, as among many other things, it was when I fell into what I’d describe as an “appeal to a celebrity’s work” fallacy. I initially believed I’d really like to work with people who make a project I admire very much, but over the months I found some of their methods and personalities to really conflict with my personal beliefs, and yet, I kept giving my involvement second chances, because I really felt like the project could use my contribution.

In the end, there’s no problem in liking an art piece exclusively because of its external appearance, even if you are not a fan of the materials nor of some of its authors. And if you think you can improve on that piece of art, expect some resistance from the authors, keeping in mind it might fall apart as you attempt to work on it. Sometimes making your own thing from scratch is really the better option: you might be called an imitator and the end result may even fall short of your own expectations, but you’ll rest easy knowing that you have no one but yourself to blame.

On a more forward-looking note, I wish you all the best for the years to come after 2020. I have a new Discord server which, unlike the UnderLX one, is English-speaking and not tied to any specific project or subject. My hope is to get in there the people I generally like to talk to and work with, so we can all have a great time – you know, the typical thing for a generalist Discord server. I know this is an ambitious goal for just yet another one of these servers, but that won’t stop me from trying. My dear readers are all invited to join Light After Dinner.



twenty twenty

time travel terrifyingly trialed: twenty days take twenty months, twenty ticks teared twenty years

I really like Discord. It’s a monster, it scares me

…and it’s also the next Steam.

Dear regular readers: we all know I’m not a regular writer, and you were probably expecting this to be the second post in the series about internet forums in 2018. That post is more than due by now – at this rate it won’t be finished by the end of the year – even though the series purposefully never had any announced schedule. I apologize for the delay, but bear with me: this post is not completely unrelated to the subject of that series.

Discord, in case you didn’t know, is free and proprietary instant messaging software with support for text, voice and video communication – or as they put it, “All-in-one voice and text chat for gamers that’s free, secure, and works on both your desktop and phone.” Launched in 2015, it has become very popular among gamers indeed – even though the service is definitely usable and useful for purposes very distant from gaming, and for people who don’t even play games. In May, as it turned three years old, the service had 130 million registered users, but this figure is certainly out of date, as Discord gains over 6 million new users per month.

If you have ever used Slack, Discord is similar, but free, easier to set up by random people, and designed to cater to everyone, not just businesses and open source projects. If you have ever used Skype, Discord is similar, but generally works better: the calls have much better quality (to the point where users’ microphones are actually the limiting factor), it uses fewer system resources than modern Skype clients on most platforms, and its UI, stability and reliability don’t get worse every month as Microsoft decides to ruin Skype some more. You can have direct conversations with other people or in a group, but Discord also has the concept of “servers”, which are usually dedicated to a game, community or topic, and have multiple “channels” – just like IRC and Slack channels – for organizing conversations and users into different topics. (Beware that despite the “server” name, Discord servers cannot be self-hosted; in technical documents, servers are called “guilds”.)

Example of Slack bot in action. Image credit: Robin Help Center

Much like in Slack (and, more recently, Skype, I believe), bots are first-class citizens, although they are perhaps not as central to the experience as in many Slack communities. In Discord, bots appear as any other user, but with a clearly visible “bot” tag, and they can send and receive messages like any other user, participate in text and voice chats, and perform administrative/moderation tasks if given permission… to sum it up, the only limit is how much code is behind each bot.

Example of Discord bot in action. Discord bots can also join voice channels, e.g. to play music.
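
To give an idea of how little code a basic bot needs, here is a minimal sketch using the discord.py library (its present-day 2.x API, which is newer than what existed when this post was written); the token placeholder and the “!ping” command are made up purely for illustration:

# Minimal sketch of a Discord bot using the discord.py library (2.x API).
# The token and the "!ping" command are placeholders for illustration only.
import discord

intents = discord.Intents.default()
intents.message_content = True  # required to read message text in discord.py 2.x

client = discord.Client(intents=intents)

@client.event
async def on_ready():
    print(f"Logged in as {client.user}")

@client.event
async def on_message(message):
    if message.author == client.user:
        return  # ignore the bot's own messages to avoid reply loops
    if message.content.startswith("!ping"):
        await message.channel.send("pong")

client.run("YOUR_BOT_TOKEN")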

I was introduced to Discord by a friend at the end of 2016. We were previously using Skype, and Discord was – even at the time – already clearly superior for our use cases. I found the “for gamers” aspect of it extremely cheesy, so much so that for a while it put me off using it as a Skype replacement. (At the time, we were using Skype to coordinate school work and talk about random stuff, and I really wasn’t a “gamer”, on PC or any other platform.) I finally caved in, to the point where I don’t even have Skype start with my computers anymore, and the Android app stays untouched for weeks – I only open it to talk to the two or three people who, despite heavy encouraging, didn’t switch to Discord. The only thing Discord didn’t have back then was screen sharing (this is no longer the case), but it was so good that we kept using it and went with makeshift solutions for screen sharing.

As time went by, I would go on to advocate for the use of Discord, join multiple servers, create my own ones and even build a customized Discord bot for use in the UnderLX Discord server. Discord is pleasant to use, despite the fact that it tends to send duplicate messages under specific terrible network conditions – the issue is more prominent when using it on mobile, at least on Android, over mobile data.

Those who have been following what I say on the internet for longer might be surprised that I ended up using and advocating for the use of a proprietary chat solution. After posts such as this one, where I look for a “free, privacy friendly” IM/VoIP solution, or the multiple random forum posts where I complain that all existing solutions are either proprietary and don’t preserve privacy/prevent data collection, or are “for neckbeards” for being unreliable or hard to set up, seeing me talk enthusiastically about Discord might make some heads spin.

I suppose this apparent change of heart is fueled by the same reason why many people, myself included, use the extremely popular digital store, DRM platform (and wannabe Discord competitor… a topic for later) Steam: convenience. It’s convenient to use the same store, launcher and license enforcer for all games and software; similarly, it’s convenient to use the same software to talk to everyone, across all platforms, conversation modes, and topics. It’s an exchange of freedom and privacy for convenience.

Surprise, surprise: it turns out that making a free-as-in-freedom (libre, if you prefer) platform for instant messaging that provides the desired privacy and security properties, in addition to all the features most people have come to expect from modern non-free platforms like Facebook Chat or Skype, while being as easy to use as them, is very difficult. Using the existing popular platforms does not involve setting up servers, sharing IP addresses among your contacts, dealing with DDoS attacks against those servers or the contacts themselves, etc., and for an alternative platform to succeed, it must offer all of that convenience too, and ideally be prepared to deal with the friction of getting everyone and their contacts to use a different platform. It was already difficult in 2013 when I wrote that post, and the number of hard-to-decentralize features in the modern chat experience didn’t stop growing in these five years. The technology giants are not interested in developing such a platform, and independent projects such as Matrix.org are quite promising but still far from being “there”. And so everyone turns to whatever everyone else is using.

In my opinion, Discord happened to be the best of the currently available, viable solutions that all my friends could actually use. It is, or was, a company and a product focused on providing a chat solution that’s independent from other products or larger companies, unlike Messenger, Hangouts or Skype, which come with all the baggage from Facebook, Google and Microsoft respectively. Discord, despite having the Nitro subscription option that adds a few non-essential features here and there, is basically free to use, without usage limits – unlike Slack, which targets company use and charges by the user.

List of Discord Nitro Perks in the current stable version of Discord. Discord is free to use, but users can pay $4.99/month or ten times that per year to get access to these features.

What about sustainability? What is Discord’s business model? To me it was painfully obvious that Nitro subscriptions couldn’t make up for all the expenses. Could they just be burning through VC money only to die later? Even by selling users’ data, it wasn’t immediately obvious to me that the service would be sustainable on its own. But I never thought too much about this, because Discord is super convenient, and alternative popular solutions run their own data collection too, so I just shrugged and moved on. If Discord eventually ran out of money, oh well, we’d find an alternative later.

Back to praising the product, Discord is cross-platform, with a consistent experience across all platforms, and can be used in both personal/informal contexts and work/formal contexts. In fact, Discord was initially promoted to Reddit communities as a way to replace their inconvenient IRC servers, and not all of those communities were related to gaming. If only it didn’t scream “for gamers” all over the place…

I initially dismissed this insistent targeting of the “gamers” market as just a way to continue the segmentation that already existed… after all, before Discord there was TeamSpeak, which was already aimed at gamers and indeed primarily used by them. By continuing to target and cater to this very big niche, Discord avoided competing head-to-head with established players in the general instant messaging panorama, like the aforementioned Skype, Facebook Messenger and Hangouts, and also against more mobile-centric solutions like WhatsApp or Telegram.

I believed that, at some point, Discord would either gradually drop the “chat for gamers” moniker, or introduce a separate, enterprise-oriented service, perhaps with a self-hosting option, although Slack has taught us that this isn’t necessary for a product to succeed in the enterprise space. This would be their true money-maker – after all, don’t they say the big money is on the enterprise side of things? Every now and then I joked, half-seriously, “when are they going to introduce Discord for Business?”

I was half-serious because my experience using Discord, a supposedly gaming-oriented product, for all things non-gaming, like coordinating an open source project or working remotely with my colleagues, was superb, better than what I had experienced in my admittedly brief contact with Slack, or the multiple years throughout which I used Skype and IRC for such things. The “for gamers” aspect was really a stain on what is otherwise a product perfectly usable in formal contexts for things that have nothing to do with playing games, and in some situations it stopped me from providing my Discord ID and suggesting Discord as the best way to contact me over the internet for all the things email doesn’t do.

These last few days, Discord did something that solved the puzzle for me, and made their apparent endgame much clearer. It turns out their focus on gaming wasn’t just because the company behind Discord was initially a game development studio that had pivoted into online chat, or because it was a no-frills alternative to TeamSpeak (and did so much more), nor because it was an easy market to get into, with typically “flexible” users who know their way around installing software, are often eager to try new things, use any platform their parents are not on, and share the things they like with other players and their friends. I mean, all of these could certainly have been factors, but I think there’s a bigger thing: it turns out Discord is out to eat Steam’s (Valve’s) lunch. Don’t believe me? Read their blog post introducing the Discord Store.

In hindsight, it’s relatively obvious this was coming; it’s a move so genius that I believe it was the plan all along. Earn the goodwill of the gamer community, get millions of gamers who just want a chat client that’s better than what Steam and Skype provide while being as universal as those among the people they want to talk to (i.e. gamers), and when the time is right, become a game store which just happens to have the millions of potential clients already in it. It’s like organizing a really good bikers’ convention, becoming famous for being a really good bikers’ convention, and then during one year’s edition, ta-da! It’s also a dealership!

The most interesting part about all this, in my opinion, is that Discord and Steam’s histories are, in a way, symmetrical. Steam, launched in 2003, was created by Valve – initially a game development company – as a client for their games. Steam would evolve to be what’s certainly the world’s most recognizable and popular cross-platform software store and software licensing platform, with over 150 million users nowadays (and this number might be off by over 30 million). As part of this evolution, Steam got an instant messaging service, so users could chat with their friends, even in-game through the Steam overlay. After a decade without major changes, a revamped version of the Steam chat was recently released, and it’s impossible not to draw comparisons with Discord.

The recently introduced Steam Chat UI. Sure, it’s much nicer, and you can and should draw comparisons, but it’s no Discord… yet.

I was of the opinion that Steam could ditch its chat component altogether and just focus on being great at everything else they do (something many people argue they haven’t been doing lately), and I wasn’t the only one thinking this. We could just use Discord, whose focus was being great chat software, and Steam could focus on being a great store. But now, I completely understand what Valve has done, and perhaps the major failure I can point out right now is that they simply took too long to draft a reply. Because, on the other, “symmetrical” side of the story…

Discord was developed by Hammer & Chisel, recently renamed Discord Inc., a game development studio founded in 2012, which only released one unsuccessful game before pivoting into what they do now – which used to be developing an instant messaging platform, but apparently now includes developing an online game store too. Discord, chat software that got a store; Steam, a store that got chat functionality, both developed by companies that are or once were into game development. Sadly, before focusing on the game store part of things, Discord, Inc. seems to have skipped the part where they would publish great games, their sequels, and stop as they leave everyone asking for the third iteration.

It is my belief that it was not too long after Discord became extremely successful – which, in my opinion, was some time in 2016 – and a huge number of gamers got on it, that they set their sights on becoming the next Steam. It’s not just gamers they are trying to cater to: they started working with game developers to build stuff like Rich Presence long ago, not to mention their developer portal was always focused not just on Discord bots, but also on applications that authenticate against Discord and generally interact with it. This certainly helped open communication channels with some game developers, which may prove useful for getting games onto their store.

Discord is possibly trying to eat some more lunches besides Valve’s, too. Discord Nitro (their subscription-based paid tier, which adds extra features such as the ability to use custom emoji across all servers or upload larger files in conversations) has always seemed to me like a poor value proposition, but I obviously know this is not the universal opinion, as I have seen multiple Nitro subscribers. Maybe it’s just that I don’t have enough disposable income; anyway, Nitro just became more interesting, as now “It’s kinda like Netflix for games.” From what I understand, it’ll work a bit like Humble Monthly, but it isn’t yet completely clear to me whether the games are yours to keep – like on Humble Monthly – or if it’s more like an “extended free weekend” where Nitro users get to play some games for free while they are in rotation. (Update: free games with Discord Nitro will not be permanent)

This Discord pivot also presents other unexpected ramifications. As you might know, on many networks all game-related stuff (like Steam) is blocked, even though instant messaging and social networks are often not blocked, as they are used to communicate with clients, suppliers, or even between co-workers, as is the case with Slack. I fear that by introducing a store, Discord will fall even more into the “games” bucket, and once it definitively earns the perception of being a games-only thing, it’ll be blocked on many work and school networks, complicating its use for activities besides gaming. The positive side of things is that if they decide to launch that enterprise version, this is an effective way of forcing businesses to use it instead of the free version, as the “general populace” version will be too tightly intertwined with the activity of playing games.

I’ll be honest… things are not playing out the way I wish they would. Discord scares me because now I feel tricked and who knows what other tricks they have up their sleeve. I would rather have an awesome chat and an awesome store, provided separately, or alternatively, an awesome chat and store, all-in-one. (And if the Discord team reads this, they’ll certainly say “but we’re going to be the awesome chat and store, all-in-one!”) But at this rate, we’ll have two competing store-and-chat-platforms… because we didn’t have enough stores/game clients or instant messengers, right?

Fortunately, the charging one has been solved now that we've all standardized on mini-USB. Or is it micro-USB? Shit.

Because of course this one had to be here, right? I could also have added a screenshot of Google’s IM apps, but I couldn’t bother finding screenshots of all of them, let alone installing them.

You can of course say, “just pick one side and your life will be simpler”, but we all know this won’t be the case. Steam chat is a long way from being as good as Discord, and the Discord store will certainly take its time to become a serious Steam competitor. Steam chat will never sound quite right for many of Discord’s non-officially assumed use cases; for example, even if Steam copies all of Discord’s features and adds the concept of servers/guilds, it’ll never sound quite right to have the UnderLX server on Steam, will it? (Well… unless maybe UnderLX pivots into something else as well, I guess.) Similarly, it’ll be harder to “sell” Discord’s non-gaming use cases by telling people to ignore the “for gamers” part, as I’ve been doing, if Discord is blatantly a game store and game launcher.

Of course I’ll keep using Discord, but I’ll probably not recommend it as much now, and of course I’ll keep using Steam, while mostly ignoring its chat capabilities – not least because most people I talk to are not on there, and most of those that are, are also on Discord. But for now, I’ll keep the games tab on Discord disabled, and I seriously hope they’ll keep providing an option to disable all the store/launcher stuff… so I can keep hiding the monster under the bed.

Sparkling Gambol One™

“We’re adding another dimension to computing.”
Cliché and meaningless, but go on. I guess this is something revolutionary

“Where digital respects the physical.”
Because, currently, digital somehow violates the laws of the universe?

“And they work together to make life better.”
“they”? who’s “they”? Oh, digital and physical, sorry. You really should learn to use commas instead of periods. So the digital and physical work together, uh? My guess is that this is about a robot.

“Magic Leap One is built for creators who want to change how we experience the world.”
Finally, now you’ve told me who it is for, and it probably helps to “change how we experience the world”, because it’s for creators who want to do that.
You still haven’t told me what it is or what it does, and bonus points for jamming the “creators” cliché in there! Congratulations, you have passed your final assignment in Unicorn University :unicorn: of Shitty Vaporware Descriptions :poop: with the grade of: flying colors :sparkles: !

My programming experience

Go to the bottom, “Summing it up”, for the TL;DR.

The day I turn this website into a portfolio/CV-like thing will come sooner or later, and arguably that’s a better use for the domain gbl08ma.com than this blog with posts nobody cares about – except when I rant about new operating systems from Microsoft. But if you really care about such posts, do not worry: the blog will still exist, it just won’t be as prominent.

Meanwhile, and off-topic intro aside, the content usually seen on such presentation websites everyone-and-their-cat seems to have these days will have to wait. In anticipation of that kind of stuff, let’s go on a somewhat depressing journey through my eight years of programming experience.

The start

The beginning was what many people would consider a horror movie: programming in Visual Basic for Applications in Excel spreadsheets, or VBA for short. This is (or was, at the time; I have no idea how it is now) more or less a stripped down version of VB 6 that runs inside Microsoft Office and does not produce stand-alone executables. Everything lives inside Office documents.

It still exists – just press Alt+F11 in any Office window. Also, the designer has Windows 7 Basic window styles… on Windows 10, which supposedly ditched all that?

I was introduced to it by my father, who knows his way around Excel pretty well (much better than I probably ever will, especially as I have little interest). My temporal memory is quite fuzzy and I don’t have file timestamps with me for checking, so I was either 9, 10 or 11 years old at the time, but I’m more inclined to think 9-10. I actually went quite far with it, developing an Excel-backed POS system with support for customer- and operator-facing character LCD screens and, if I remember correctly, support for discounts and loyalty cards (or at least the beginnings of it).

Some of my favorite things I did with VBA consisted of making it do things it was not really designed for, such as messing with random ActiveX controls and making it draw strange-looking windows (forms) and controls through convoluted Win32 API calls I’d copied from some website. I did not have administrator rights on my computer at the time, so I couldn’t just install something better. And I doubt my Pentium III-powered computer, already ancient at the time (but which still works today), would have kept up with a better IDE.

I shall try to read those backup CDs and DVDs one day, for a big trip down memory lane.

Programming newb v2

When I was 11 or 12 I was given a new computer. Dual-core Intel, woo! This and 2GB of RAM meant I could finally run virtual machines, and so I was put on probation: I administered the virtual computers, and soon the real hardware followed (the fact that people were tired of answering Vista’s UAC prompts also helped, I think). My first encounter with Linux (and a bunch of other, more obscure OSes I tried for fun) was around this time. (But it would take some years for me to stop using Windows primarily.)

Around this time, Microsoft released the Express (free) editions of VS 2008. I finally “upgraded” to VB.NET, woo! So many new things to learn! Much of my VBA code needed changes. VB.NET really is a better VB, and thank Microsoft for that, otherwise the VB trauma would be much worse and I would not be the programmer I am today. I learned much about the .NET framework and Visual Studio with VB.NET, knowledge that would be useful years later, as my more skilled self did more serious stuff in C#.

In VB.NET, I wrote many lines of mostly shoddy code. Much of that never saw the light of day, but there are some exceptions: multiple versions of Goona Browser made their way to the public. This was a dual-engine web browser with an advanced UI, and futuristic concepts some major players copied years later.

How things looked like, in good days (i.e. when it didn’t crash). Note the giant walls of broken English. I felt like “explain ALL the things”! And in case you noticed the watermark: yes, it was actually published to Softpedia.

If you search for it now, you can still find it, along with its website which I made mostly from scratch. All of this accompanied by my hilariously broken English, making the trip to the past worth its weight in laughs. Obviously I do not recommend installing the extremely buggy software, which, I found out recently, crashes on every launch but the first one.

Towards the later part of my VB.NET era, I also played a bit with C#. I had convinced myself I wanted to write an operating system, and at the time there was a project called COSMOS that allowed for writing a (pretty limited) OS in C#… of course, my “operating” systems were not much beyond a fancy command line prompt and a help command. All of that is, too, stored on optical media, somewhere… and perhaps on the disk of said dual-core computer. I also studied and modified open source programs made in C# (such as the file downloader described in the Goona Browser screenshot) for my own amusement.

All this happened while I developed some static websites using Visual Web Developer Express as an editor. You definitely don’t want to see those (mostly never published) websites, but they were instrumental in learning a fair bit of HTML and CSS. Before Web Developer I had also experimented with Dreamweaver 8 (yes, it was already old back then) and tried my hand at animation with Flash 8 (actually, I had much more fun using it to disassemble existing SWFs).

Penguin programmer

At this point I was 13 or so and had long since had my first contact with Linux through VMs and Live CDs, aaand it happened: Ubuntu became my main OS. Microsoft “jail” no more (if only I knew what a real jailed platform was at the time…). No more clunky .NET! I was fed up with the high RAM usage of Goona Browser, and with bugs I was having a hard time tracking down due to the general clumsiness of the code.

How Ubuntu looked when I first tried it. Good times. Canonical, what did you do?

For a couple of years, in terms of desktop development, I only made some Python scripts for my own amusement and played a very small bit with MonoDevelop every time I missed .NET. I also made a couple of Lua scripts for Rockbox. I learned a lot about Linux usage and system maintenance as I used it more and more on my own computers and on my first Virtual Private Servers, which I got after much drama in the free web hosting communities. Ugh, how I hate cPanel.

It was around this time that g.ro.lt and n.irc.su appeared. g.ro.lt was a URL shortener that would later evolve into 4.l.to and, eventually, tny.im. n.irc.su was a social network built on Elgg, which obviously failed. I also made some smaller websites, like one that would take you to random image hosting websites, URL shorteners and pastebins, so you would not use the same service every time you urgently needed one. These represented my first experiences with PHP programming.

I have no pictures to show. The websites are long gone, not on the Internet Archive, and if I took screenshots, I have no idea where I put them. Ditto for the logos. I believe I still have the source code for the random-web-service website somewhere, at least the front page layout.

All this ran on top of free stuff: free (and crappy) subdomains, free (and crappy) web hosting, free (and less crappy) virtual servers. It would take me some time until I finally convinced myself I needed to spend some money for better reliability, a modicum of support and less community drama. And even then I would spend Bitcoin, which I had earned back when it was really cheap, making the rounds of silly faucets and pulling money out of CPAlead-like offers through the use of multiple proxies (oh, the joy of having multiple VPSes…). To this day I still don’t have a PayPal account.

This period, together with the time when I actively developed tny.im (as opposed to just helping maintain it), was the peak of my gbl08ma-as-web-developer phase. As I entered and went through high school, I would drift further and further away from HTML and friends (though not from server maintenance), to embrace something completely different…

Low level, little resources: embedded systems

For high school math everyone had to use a graphing calculator. My math teacher recommended Casio calculators (not out of any vested interest) because of their ease of use (and even excitedly mentioned, Casio leaflet in hand, the existence of a new and awesome color screen model that “did everything and some more”). And some days later I had said model in my hands: a Casio fx-CG 20, or Prizm, which had been released about a year before. The price difference from the earlier dot-matrix-screen Casio calcs was too small to pass up the color screen.

I was turning 15, or had just turned 15. I remember setting up the calculator and thinking, not long after, “I want to code for this thing”. Casio’s built-in Basic dialect is way too limited (and after having coded in “real” languages, Basic felt silly). This was in September 2011; in March of the following year I would release my first Prizm add-in, CGlock, a calculator PIN-locking program.

Minimalist look, yay! So much you don’t even notice it’s a color screen.

This was my first experience with C; I remember struggling with pointers and getting lots of compilation warnings and errors, and run-time errors. Then at some point everything just “clicked” and C soon became my main language. Alas, for developing native software for the Prizm, it is the only option (besides using C++ stripped of most of its features, not even the “new” keyword).

The Prizm is a horrible platform, especially for newbie C programmers. You can’t use a debugger or look at memory contents, the OS malloc/free implementation has bugs (and the heap is incredibly small compared to the stack), and there’s always that small chance that some program damages your calculator, or at least corrupts your precious files and notes. To this day, using valgrind and gdb on the desktop feels to me like science fiction made real. The use of alloca (stack allocation) ends up being preferred over dynamic allocation, leading to awkward design decisions.
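
To make the awkwardness concrete, here is a minimal, made-up sketch (not taken from any actual add-in) of the pattern: a short-lived buffer that would normally come from malloc lives on the stack instead.

```c
#include <string.h>
#include <alloca.h> /* on some toolchains alloca is a compiler builtin or declared in stdlib.h */

/* Hypothetical helper: make a scratch copy of a line without touching the
   tiny, buggy heap. The memory disappears automatically when the function
   returns, so there is no free() to forget; on the other hand, the buffer
   must not outlive the function, and a huge len can still blow the stack. */
void process_line(const char *line)
{
    size_t len = strlen(line);
    char *copy = alloca(len + 1);
    memcpy(copy, line, len + 1);
    /* ... work on the temporary copy ... */
}
```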

Example of all the information you can get about an error in a Prizm add-in. It’s up to you to go through your binary (and in some cases, disassemble the OS) to find out what these mean. Oh, the bug only manifests itself when compiling with optimizations and without symbols? Good luck…

There is a proprietary emulator, but it wasn’t designed for software development and can’t emulate certain things. At least it’s better than risking damage to expensive hardware. The SuperH-4 CPU runs at 58 MHz and add-ins have access to about 600 KiB of memory, which is definitely better than on classic Z80-powered Texas Instruments calculators, but one still can’t afford memory- or CPU-intensive stuff. And what you gain in performance and screen resolution, you lose in control over the hardware and the OS, which still have lots of unknowns.

Programming for the Prizm taught me what it’s like to work without the help of the C standard libraries (or rather, with the help of incomplete and buggy standard libraries), what a stack overflow looks like (when there’s no stack protection), how flash memories work, what DMA is, what MMUs do, and how systems can be bricked when their only bootloader is not read-only. It taught me how compilers work from an end-user perspective, what kinds of problems and advantages optimizations introduce, and what it’s like to develop parts of the C standard library.
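
As an illustration of that last point, here is roughly the kind of thing one ends up writing when the platform’s libc is missing pieces; these are generic, freestanding stand-ins, not the actual Prizm SDK implementations.

```c
#include <stddef.h>

/* Naive strlen: walk the string until the terminator. */
size_t my_strlen(const char *s)
{
    const char *p = s;
    while (*p)
        p++;
    return (size_t)(p - s);
}

/* Naive memcpy: byte-by-byte copy; a serious version would move word-sized
   chunks, but this is the level of thing a homebrew libc starts with. */
void *my_memcpy(void *dst, const void *src, size_t n)
{
    unsigned char *d = dst;
    const unsigned char *s = src;
    while (n--)
        *d++ = *s++;
    return dst;
}
```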

It also taught me that Casio support in Portugal (Ename) is pretty incompetent at fixing calculators: they turned my CG 20 into a CG 10 and left two big capacitors out of a replacement main board. On the hardware side, I learned quite a bit about digital logic from Prizm hardware discussions at Cemetech. I also had some contact with SH4 assembly and got a glimpse of how to use IDA Pro. Thank you Casio for developing a system that works so well and yet is so broken in so many under-the-hood ways, and thank you Cemetech for briefly holding the Prizm in higher regard than TI calcs.

I developed other add-ins, some from scratch and others as ports of existing PC software (such as Eigenmath). I still develop for the Prizm from time to time, but with less and less motivation, as the homebrew community has stagnated and I use my Prizm much less now that I’m at university. Experience in obscure calculator platforms does not make for a nice CV.

Yes, in three years or so I went from the likes of Visual Studio to a platform where the only way to debug is to write text to the screen. I still like embedded and real-time programming a lot, and have since moved on to more generic and well-known platforms, such as the ESP8266.

Getting in the elevator

During the later part of high school (which I started in the fall of 2011 and ended in the summer of 2014), I did more serious Python stuff, namely Mersit, later deprecated in favor of Picored, which is not written in Python but in Go. Yes, I began trying higher-level stuff again (higher level, getting in the elevator… sorry, I’m bad at jokes).

My first contact with Go came when I was 17, because I wanted to develop something that ran without external dependencies (i.e., unlike Java or .NET) and compiled to native code. I wanted to avoid C/C++, but I wasn’t looking for “a better C” either, so Rust was not it. After seeing so much about Go on Hacker News, one day I decided to try my hand at it, and I ended up liking it quite a lot – I’m still unsure whether I like it because of the language itself or because of the great libraries one can use with it, but I think both play an important role.

This summer I decided to give C# another chance and I’m quite impressed – it turns out I like it much more than I thought. It may have something to do with trying it after learning proper languages, versus trying it when one only knows VB. I guess my VB.NET scars have healed. I also tried a bit of Java, in my first contact with it ever, and it seems my .NET hate has converted into Android API hate.

Programming with grades

University gave me the opportunity (or rather, the obligation) to have other people criticize my code. The general public could already see the open-source C code of my Casio Prizm add-ins, and even the ugly code of Goona Browser, but this time my code was getting graded. It went better than I initially thought it would – I guess the years of experience programming in different languages helped, especially since many of the people I’m being compared with only started programming this year.

In the first semester we took an introductory programming course, which used Python, and while it was quite easy for me, I took the opportunity to learn Python to a greater depth than “language in which to write quick and dirty glue code”. You see, until then I had not used classes in my Python code, for example. (This only goes to show Python is a versatile language, even if slow.)

We also took an introductory computer architecture course where we learned how basic CPUs work (it was good for gluing together all the separate bits of knowledge I already had on the subject) and programmed in assembly for a course-specific, CISC-like architecture. My previous experience with reading SH4 assembly proved quite useful (and it seems that nowadays the line between RISC and CISC is more blurred than ever).

In the second semester, I had the opportunity to exercise my C knowledge, this time not limited to the Prizm platform. More interestingly, we were introduced to logic programming, a paradigm I had never intended to program in. So Prolog it was. It went much better than I anticipated but, like most other people who (are forced to) learn it, I have no real use for it. So the knowledge is there, waiting for The Right Problems(tm). I’m afraid I’ll forget much of it before it becomes useful, but if there’s one thing that picking C# up again taught me, it’s that I can quickly pick up skills I learned and abandoned long ago.

The second year is about to begin, and there’s some object-oriented programming coming; I hope I do well.

Summing it up

I have written non-trivial amounts of code in at least nine languages: Visual Basic, PHP, C#, Python, Lua, C, Go, Java and Prolog. I have had contact with two assembly dialects, designed web pages with HTML, CSS and JavaScript, and of course automated some tasks with bash or plain shell scripting. As you can see, I’m yet to do any kind of functional programming.

I do not like “years of experience” as a way to measure language proficiency, especially when such languages are learned for use in short-lived side projects, so here’s a list with an approximate number of lines of code I have written in each language.

  • C: anywhere between 40K lines and 50K lines. Call it three years experience if you will. Most of these were for Prizm add-ins, and have since been rewritten or heavily optimized. This is changing as I develop less and less for the Prizm.
  • PHP: over 15K lines, two years if you want to think that way. The biggest chunk of these were for developing the additions to YOURLS used in tny.im, but every other small project takes its own 200-500 lines of code. Unfortunately, most of this is “bad” code, far from idiomatic. The usual PHP mess, you know.
  • Python: at least 5K lines over what amounts to about six months. Of these, most of the “clean” lines (25-35%) were for university projects.
  • Go: around 7K lines, six months. Not exactly idiomatic code, but it’s clean and works well.
  • VBA: uh, perhaps 3 or 4K lines, all bad code 🙂
  • VB.NET: 10K lines or so, most of it shoddy code with lots of Try…Catch to “fix” the problems. Call it two years experience.
  • C#: 10K lines of mostly clean and documented code. One month or so 🙂
  • Lua: mostly small glue scripts for my own amusement, plus some more lines for use in games such as Minetest, I estimate 3-4 K lines of varying quality.
  • Java: I just started, and mostly ported C# code… uh, one week and 1.5K lines?
  • HTML, CSS and JS: my experience with JS doesn’t go much beyond what’s needed to modify DOM elements and make simple AJAX requests. I’ve made the frontend for over 5 websites, using the Bootstrap and INK frameworks.
  • Prolog: a single university assignment, ~250 lines or one month. A++ impression, would repeat – I just don’t see what for.

In addition to all this, I have some experience launching the programs and services I make – designing logos/branding, versioning, keeping changelogs, update instructions, publishing, advertising, user support. Note that I didn’t say I’m good at any of these things, only that I have experience doing them, for better or worse…

Things I’d like to have more experience with:

  • Continuous integration / testing in general;
  • Debugging code by means other than Visual Studio’s debugger or printing debug lines in C;
  • Using Git and other VCS in big repos/repos with more people (I want to see those merge conflicts and commits to the wrong branch coming);
  • Server-side web development on something other than PHP and Go. And learning to use MVC frameworks, independently of the language;
  • C++ (and Java, out of necessity. Damned Android);
  • Game development. Actually, this is how many people start, but I’m so cool that I started by developing POS software 🙂

Windows 10 is pretty good

After yesterday’s popular post Windows 10 is unfinished, where I bashed said OS, today I’m going to praise Windows 10 (where possible). This is so we can keep up the opinion diversity people are now accustomed to seeing on the Web, faithfully satisfying the thousands of Reddit and Hacker News users who can’t miss a beat on hot technology topics and, especially, on hot discussions about those topics.

A lot of people took my post as my definitive opinion on the matter, and as if I were stating some universal truths, and mistakenly concluded that I only had negative things to say about Microsoft’s latest big release. Others said I focused on the wrong problems; that the design issues were minor nitpicks – and indeed they are, when compared to the functionality problems (which I’m also having, but apparently that part was overlooked). My intention was not to write a fanboy post nor to start flamewars, and the same goes for this post.

Yesterday’s post was written from start to finish on my Windows 10 tablet, without a hardware keyboard (yes, it was painful, but not as painful as it would have been on an Android tablet with similar characteristics), including the screenshots and image editing (MS Paint FTW!). That’s not the case with today’s post, which was written on my laptop, because Microsoft is yet to issue an update that fixes the virtual keyboard in Windows 10. The OS the laptop is running doesn’t matter; let’s just say I’m writing this in MS-DOS 6.0’s edit.

Let the deserved Windows 10 praise begin.

Upgrade process

I upgraded from Windows 8.1 before Microsoft decided it was ready for me to install. Yes, I forced the download and installation process. I wanted to get it downloaded before the end of July, so that it would not count towards this month’s data cap. I wanted to get it installed because I thought it would have tons of updates to download in the first days (not the case), and also because I’m going to need this tablet operational by September, when university classes begin, so I figured I’d better get used to it and point out all the mistakes sooner rather than later.

Yes, I could have stayed for another year on 8.1 before losing the option to upgrade for free, but I’m also interested in developing Universal Apps, so here’s that.

Despite me rushing the update and the tablet having 32 GB of storage of which only 22 GB are for the Windows partition, the process went perfectly, and apparently I still have the option to go back to 8.1 if I wish (at the expense of only having 2 GB of free disk space on C:). All data and apps were kept, except f.lux, possibly because (as far as I could understand when uninstalling its remnants) it was installed in AppData (note that AppData is mostly kept, too, but f.lux in particular wasn’t).

From leaving Windows 8.1 to seeing the Windows 10 desktop, it took my tablet about an hour and a half. The flash storage on it is not especially fast (definitely not an SSD), which probably explains why it took longer than the one hour most people seem to need.

All things considered, the upgrade process went surprisingly well and was fast, as appears to be the case for the majority of users. Much better than ending up with a system that doesn’t boot at all, or with driver issues (which some users are still having), which as far as I remember were common problems with previous versions’ in-place upgrades. Also, kudos to Microsoft for making it work on devices with such a limited amount of system storage.

Initial setup

There was the first-run setup, where the controversial privacy defaults live (I disabled almost everything), but the most complicated part comes after the system finishes installing. In my case, Windows understood this was a tablet and accordingly selected tablet mode automatically. Because on 8.1 I basically only used the desktop, and because I thought it would be easier to find most settings in desktop mode, I immediately went looking for the switch, and since then I have only used desktop mode.

The desktop mode still works very well with touch screens; I did go back to tablet mode for five minutes just to check it out, but returned quite quickly, as I deemed the desktop good enough. Tablet mode didn’t fix the problem of the touch keyboard appearing over other windows even when docked, which would have been its major selling point for me right now.

Windows 8’s Modern apps were kept from the previous version, including the MSN-powered apps such as Travel, which have been discontinued and will stop working in September. Of course, those that have a Universal app replacement (Mail, Calendar, Twitter, Maps, possibly more) were replaced. In the case of Mail and Calendar, the previously added accounts were remembered, but I had to link Google and Microsoft accounts again and re-enter credentials for IMAP accounts.

OneDrive apparently now refuses to have its folder outside the C: drive, or perhaps that’s only a problem when the folder you want to choose is on a removable drive. I solved this by mounting the SD card, where I had the OneDrive folder, on the C: drive (NTFS mount points FTW!), then pointing OneDrive to this mount point. Yes, I know what I’m doing and you should too. This SD card, unlike what Windows thinks, is never removed.
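
For reference, the same kind of mount point can also be created programmatically with the Win32 volume management APIs; this is just an illustrative sketch with made-up paths (the Disk Management GUI does the same thing in a few clicks).

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    WCHAR volume[MAX_PATH];

    /* Get the volume GUID path of the SD card; D:\ is an assumed drive letter. */
    if (!GetVolumeNameForVolumeMountPointW(L"D:\\", volume, MAX_PATH)) {
        fprintf(stderr, "GetVolumeNameForVolumeMountPoint failed: %lu\n", GetLastError());
        return 1;
    }

    /* Mount that volume on an existing, empty folder of an NTFS volume,
       e.g. C:\SDCard\. Both paths need a trailing backslash, and this
       requires administrator rights. */
    if (!SetVolumeMountPointW(L"C:\\SDCard\\", volume)) {
        fprintf(stderr, "SetVolumeMountPoint failed: %lu\n", GetLastError());
        return 1;
    }

    wprintf(L"SD card is now also reachable at C:\\SDCard\\\n");
    return 0;
}
```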

I also had to download desktop Skype. Before I was using the Modern UI version of Skype, which was discontinued some time ago. But the desktop version uses so much RAM and is less touchscreen friendly, making it one of the most annoying parts of my Windows 10 experience. It also doesn’t update with new messages during Connected Standby, which is a thing my tablet has and I’m going to talk about later, and it doesn’t put its notifications in the new Action Center, either.

Tablet usage

People are saying the tablet experience has actually gotten worse with Windows 10, but to be honest, if they fixed the touch keyboard I’d say it is as good as it was in Windows 8. Of course, if you are used to the charms bar and to the gesture of “swiping down an app” to close it, you’ll be out of luck:

  • swiping from the top on a window does nothing except move or restore it (if it was maximized);
  • swiping from the left opens the Action Center (where some handy, more or less configurable shortcuts are located, so you won’t miss the “Settings” part of the charms bar);
  • swiping from the right shows the task view, where you can switch apps and desktops;
  • sadly there’s no longer a way to bring up a big clock, even when running full-screen stuff (games, videos…), something the charms bar was good for.

As has been widely reported, Universal apps, Windows 8 apps and “normal” software made for the Win32 API now all work together, with the same window borders and titles, showing up in the same task lists. If only it had been this way from the beginning, Windows 8 would not have received so much negative criticism and “Modern apps” might have actually seen more use. Yes, I believe windows are adequate even for tablet devices (and not just by putting two windows side by side), and that is certainly one of Windows’ differentiating factors in the world of tablet OSes.

Resource usage

I still can’t comment much on this part, because I’m having some issues with my Voyo A1 Mini that look less like Windows’ fault and more like driver problems. The “System” process (i.e., the NT kernel) is often using multiple MBs of RAM. I know I’m not the only user with this problem; there is at least one known bad network driver, but I don’t use it. I’ve also seen suggestions for disabling the network device usage service, but in my case that didn’t help. The result is that 90-95% of physical memory is always in use, with the commit charge at something like 3 GB out of 3.9 GB.

I have also noticed that the search indexing stuff has become more aggressive again in Windows 10, after being mostly quiet on 8.1 (as far as I could tell). But since I haven’t done any serious monitoring, this could be just my impression.

The update could also have damaged the special CPU throttling set up for this device, given that it now runs much hotter than before, even under the same typical load. It appears the CPU (an Intel Bay Trail) works at higher frequencies more often – just a slight load and there it goes to 1.55 GHz or so (the “announced speed” of the CPU is 1.33 GHz). I have updated to the latest DPTF (Intel’s thermal stuff) drivers, which reduced the problem a bit, but it’s still present.

Now, this isn’t all that bad, given that Windows remains very responsive even with the CPU at 75 degrees Celsius and 95% of physical memory used. Let’s just wait for updates, both for Windows and for the drivers, before drawing further conclusions.

Connected Standby is still annoying

My tablet supports Connected Standby. On Windows 8, it was more or less like suspending the computer, but Windows Store apps could still run in the background to perform small tasks, and if you were playing media in such an app, it would keep playing even with the screen off – just like with Android devices.

The problem is if you want to use something other than a Windows Store app (read: 99.9% of the software available for Windows) to play music or download files, or if you want to watch YouTube with something other than IE’s Modern UI mode. Windows will just suspend desktop apps, and they will stop playing, or downloading, or crunching numbers. What makes this really annoying is that there is no way to turn off the screen without entering Connected Standby. So it’s burning extra battery and, at night, our eyes too.

In Windows 10, Connected Standby is more or less the same. I had hoped that with Windows 10 they would add an option to whitelist certain “old-fashioned” (Win32) apps so they could keep running during Connected Standby, or alternatively, a way to turn off the screen without going into standby.

At least, the “Sleep” and “Turn off the screen” settings now seem a bit better decoupled, and with my current settings (turn off screen after 2 minutes, sleep after 4) there is a bigger delay between when the screen turns off and the music stops playing. During this delay one can tap the screen and it will turn back on, instantly. Just like with a normal laptop that turns off the screen after a while. Let’s just hope Microsoft doesn’t consider this to be a bug and doesn’t “fix” it.

Cortana

I can’t comment much on the Cortana feature itself, but I can comment on the stuff surrounding Cortana and whether the feature is enabled or not. Here, Windows is set up with a system language of US English. The region was set to Portugal, and the time and date and formatting settings to Portuguese. I was told by a friend I had to set my region to US for Cortana to become available, and that’s indeed true.

I just don’t understand why, if Cortana is going to speak English anyway (because that’s the system language), it has anything to do with the region. Unless it expects to change the language it uses depending on the region setting, rather than on the language I want to see (and hear) things in. Oh well.

Finally, I did watch Cortana tell me how awesome all the things it can do are, but I didn’t enable it because of the privacy policy, and I don’t think I’d use the functionality enough for it to be worth yet another “I agree” on a privacy setting. I can always turn it on later.

Feedback

Microsoft seems really interested in listening to what users have to say, so there’s a dedicated feedback app and everything. Unfortunately, this app filters content by region instead of by language, which limits what feedback you can see and upvote. I wonder if anyone at Microsoft will look at the feedback from less populous countries like the one I live in, and even smaller ones.

Microsoft also seems really interested in learning how people use the OS, so much so that only Enterprise users can completely disable this kind of feedback. Privacy concerns aside, I really hope the data generated by these feedback tools won’t be used as a motivator or justification for taking away even more features and customization ability.

Rolling release

I always wanted to move to a rolling-release Linux distro, but I’m yet to make the move; it appears I switched to a rolling Windows release before I did the same with Linux! I actually think it’s a very good idea to stop releasing major versions and put new things out in a more continuous way. Major upgrades are a hassle, even when the upgrade itself takes just one hour – first a giant download, then having to wait while Windows upgrades and reboots multiple times, then having to go through the many little settings that are new or changed in the new version…

I would be even happier if every user had the ability to refuse, or at least delay, certain updates (if only because of, say, known driver and software incompatibility issues). The way things are done right now only makes the whole thing look like a giant Microsoft-controlled botnet and, by paving the way to Windows-as-a-service, makes people fear a future where you’ll pay for Windows by the month (and perhaps by the window/app/user?).

Finally, it’s about time Microsoft found an ingenious way around how file handles work in Windows, such that system files can be replaced without rebooting. Or at least, they could make the reboots less disruptive, for example by “suspending” the apps before the reboot and restoring them afterwards.

Conclusion

My conclusion is to sit and wait. Windows 10 is actually pretty good for what feels like the end result of a development cycle damaged by setting a release date way too early. It should have shipped only when it was ready, but I understand Microsoft not wanting to deal with another “XP to Vista” situation, where it took five years to release a new OS version, with an abandoned revolutionary version in between and a shitty end result. This way, the worst people can say is that it’s shitty, but at least it came out on time.

If you are using Windows 7 on a desktop and are happy with it, or using 8.1 on a tablet, I don’t think you have much to gain by upgrading now, unless you desperately want to use Cortana. People using Windows 8.1 without a touchscreen may find more value in upgrading now, especially if they use Modern UI apps and are annoyed by the context switches between them and the desktop.

Anyway, I always wanted to try Longhorn in its unstable and unpolished state, and now here is an opportunity – not with Longhorn, but with another revolutionary Windows version that, while stable, has big polishing needs of its own. But we already talked about that…

Windows 10 is unfinished

Windows 10 came out some hours ago, and, surprise surprise, it’s unfinished! I can’t complain about the system stability (even though the Windows Reliability History tells me there have been some errors happening in the background), but the RAM usage has gone up when compared to 8.1. On a device with just 2 GB of RAM, this matters, but not nearly as much as what’s coming next…

What’s really worse is the touch experience – ruined, compared to 8.1. For one, the touch keyboard no longer docks properly, which means that 90% of the time the cursor is behind the keyboard and I can’t see what I’m writing (I can’t believe nobody complained about this in the previews!). Then there are the ultra-invasive privacy settings defaulting to on, which I disabled during the first-run setup, but apparently some choices were ignored – for example, I disabled error reporting, and when I later went to check, I found it enabled at its highest level.

Windows 10 still suffers from many of the problems of Windows 8 in terms of UI inconsistency. The void between the “modern” UI and the classic desktop is greatly reduced, with Modern apps and Universal apps running windowed just like all other software. But things are far from perfect.

Microsoft didn’t quite manage to get rid of legacy design paradigms, and the OS still speaks at least three different design languages: if you look carefully, you’ll see elements that would fit better in Windows 7, others that are the continuation of the “modern UI” design, and things that would really fit better in XP and earlier (like the small, tabbed setting dialogs reachable from the legacy Control Panel).

There are still two control panels, with certain things only accessible in one of them, and others available in both but with different names for the same thing (or the same thing, but negated, as is the case with screen rotation lock – in some places, “on” means “do not rotate”; in others it means “allow rotation”).

At least, there are now some more links between the two settings panels, but sometimes Windows will just tell you “This setting is now on …” without actually taking you there.

Depending on where you right-click (and, for certain things, how the planets are aligned) you can open at least four different styles of context menu.

Both Windows 8 and 8.1 were, despite their messy paradigms and inconsistent styles, more polished in terms of looks than Windows 10. Windows 10 has an incomplete icon set, with many icons yet to be updated to the new design. The fact that the icons are very different from those of 7 and 8 (the icon change from 7 to 8 was much more subtle) only makes the problem worse. You don’t need to look hard to find icons that are yet to be updated.

Leaving design aside, we can see that they tried to remove some functionality, like Windows Update, from the legacy Control Panel. But the migration conveys a feeling of incompleteness:

Many settings are duplicated in the Settings app and in the Control Panel. But it’s often not a 1:1 relation: to uninstall modern apps, for example, you must go through the Settings app. Going through the old Programs and Features won’t show these apps.

Certain things were renamed – the “Action Center” is the new notification center of Windows 10 (which is a really appropriate name, and what the Action Center should have been since the beginning). If you are looking for the old thing, it still exists:

There are at least two ways to add devices, with different UI flows. Also note the lack of padding on the icon of the window to the right:

The sometimes useful Math Input Panel is still stuck in the past of Windows Vista or 7, with obvious readability problems in the menu:

Then there are gems like this dialog which, depending on where it is opened from, shows different items (possibly not exclusive to Windows 10):

The first non-preview release of Windows 10 still contains too many rough edges and suffers from a lack of attention to detail I was only used to seeing in older Windows’ preview releases. I say “first non-preview release”, because as Microsoft is switching to a rolling release model, it no longer makes much sense to call this a “final release”.

Intentionally or not, Microsoft pushed the quality assurance process onto the end user. For what is supposedly the best Windows ever made, I’m not impressed. Thank God I didn’t pay for it (even though it’s for sale, and it’s not cheap).