On paid public beta testing

Important: This post was initially drafted in September 2023. I then completely forgot to finish it, that is, to proofread it and give it a better conclusion. Rather than let the work go to waste, I decided to post it now with minimal editing. The specific situation that prompted its creation has long passed, but I think the general thoughts and concerns are still applicable. While it is not relevant to the arguments being made, I’ll mention that since then, in mid-2024, I did finally buy and play through Phantom Liberty, which I thoroughly enjoyed.

The recent 2.0 update to Cyberpunk 2077, and the near-simultaneous paid DLC release, made me reflect on the redemption arc that game has gone through. Just like record-selling titles such as GTA V and Minecraft, it can’t possibly ever please all audiences, but I consider Cyberpunk to finally be in a state where it mostly warrants the massive marketing and hype it received leading up to its release, which happened in December of 2020 – nearly three years ago. While I think the game had already become worth its asking price quite some time ago – the developers, CDPR, issued multiple large patches over the years – this recent 2.0 update introduces so many changes, from small details to large overhauls of gameplay systems, that it makes the 1.x series look like beta versions of a game that was two thirds of the way into its development cycle.

In a world where so many software publishers have moved towards continuous releases with no user-visible version numbers, or versioning that is little more than a build number, and considering that live service games are one of the big themes in the industry, it’s almost shocking to see a triple-A game use semantic versioning properly. And bringing semver to the table isn’t just a whim of a software developer failing to come up with good analogies: in semver, you increase the major version when you introduce incompatibilities, and 2.0 introduces quite a few of them. 2025 editor’s note: CDPR has started/resumed using “creative” minor patch numbers which I am not 100% sure strictly follow semver.
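
For readers less familiar with the convention, here is a minimal sketch in Python of the bump rule described above – purely illustrative, with made-up function and version numbers, not CDPR’s or any official semver tooling:

```python
# Minimal, illustrative semver bump rule: breaking changes raise the major
# version, backwards-compatible features raise the minor, fixes raise the patch.
def bump(version: tuple[int, int, int], change: str) -> tuple[int, int, int]:
    major, minor, patch = version
    if change == "breaking":   # incompatible changes -> new major version
        return (major + 1, 0, 0)
    if change == "feature":    # backwards-compatible additions -> new minor version
        return (major, minor + 1, 0)
    return (major, minor, patch + 1)  # backwards-compatible fixes -> new patch

# Shipping incompatible gameplay overhauls on top of an (illustrative) 1.63:
print(bump((1, 63, 0), "breaking"))  # (2, 0, 0)
```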

In technical terms, CDPR dropped compatibility for an older generation of consoles, and raised the minimum requirements on PC (although the game generally appears to run smoother than before, at least on hardware that was already above the recommended spec). But the possible incompatibilities extend to the flesh sitting in front of the screens: there have been major changes to character progression, balancing of different combat options, introduction of new gameplay features like vehicle-oriented combat and, although I haven’t been able to confirm this, perhaps even small changes to the consequences/presentation of some story choices. So players who were already comfortable with, and accustomed to, the way 1.x played may be unhappy, or at least temporarily uncomfortable, with the changes made for 2.0. Personally, I am still on the fence about some changes, but to be honest, I haven’t explored this new version for more than three or so hours yet. 2025 editor’s note, hundreds of hours later: I cannot presently name any change I am not happy with.

There is no doubt that 2.0 is a major overhaul of CP2077, and I am convinced that, overall, the changes were for the best: the game is clearly a better and more complete product now. This doesn’t mean there aren’t still more improvements that could be made. Furthermore, this game’s past cannot be changed, meaning that all the material needed for pages-long essays and hours-long “video essays” will forever exist. After all, there is a strong argument that the game took almost three years to reach the minimum level it should have been at on release, considering the massive marketing campaigns and all the “half promises” made along the way. However, I strongly believe the game ended up becoming a better product than if its launch had been smooth; the space for the development of a 2.0 version, one that can afford to introduce this many changes to the gameplay and still be received positively, likely would not have been there if Cyberpunk had been just yet another “mildly disappointing considering the hype, but otherwise decent and competent” major release.

The main notion I wanted to explore in this essay is this perverse situation, where there is seemingly an incentive to release unfinished software as if it were finished, disappoint consumers, and still end up profiting handsomely – sometimes even with a greatly improved product to sell and plenty of praise to receive, in ways that would be difficult to achieve otherwise. “Entitled gamers” (often paying consumers) may be the noisiest about it, and gaming likely has the most prominent and recognized “redemption arcs,” but this situation permeates all sorts of software development, including areas that were once seen as “highly sensitive” or “mission critical”, such as certain embedded systems, communication systems, and finance software.

Note: it is probably a good time to remind readers that these opinions are my own and do not represent the views of my employer, nor my opinions on my employer’s views and strategy.

Software engineering can move fast and break things: the internet gives you the luxury of fixing things later. This is a luxury that not many other engineering disciplines can afford, but it is also a fallacy: you can’t issue a patch to your reputation nor to societal impact as easily as you can deploy a new version of your code.

Internet pervasiveness has been changing the risk management and product iteration paradigms around software development for the last two decades or so. In some embedded systems, the paradigm shift was named the “internet of things”, although embedded systems that can be upgraded remotely had been a thing for decades before the term was popularized – the real change is that there are many more of these now, in categories where they didn’t really exist before, such as home appliances. Connecting more and more things to the internet seemingly becomes more acceptable the easier they are to connect to a network, and many advancements were made on the hardware front to enable this.

In gaming, there is the well-known concept of “early access” to clearly label products which are under development, liable to undergo significant changes, but already available to consumers. Some developers use this to great effect, collecting feedback and telemetry from large numbers of players to ultimately end up with a better product. Outside of gaming, technically minded consumers have long been familiar with the term “beta testing.” Beta/early access software may be available for purchase (although, admittedly, sometimes discounted) or be provided at no cost to existing users. In any case, consumers enrolling in these programs are aware of what they’re getting into.

Over the last decade or so, I feel that users have been gradually exposed to software and services whose initial versions are less and less complete. Some of this is to be expected and encouraged, such as the aforementioned beta and early access programs that have the potential to improve the final product. But clearly the feedback from beta testing programs didn’t feel sufficient to many developers, who started including more and more telemetry in hopes of collecting more feedback, without users having to manually provide it or specifically opt into any study.

I believe the really objectionable situations are those where the barrenness or incompleteness is initially obscured and then, if users are lucky, iterated upon at a pace which is often unpredictable. It makes product reviews and testimonials quickly become outdated, and thus mostly useless. This development model is convenient for the developers, as it theoretically represents the least risk: ideas, features and business models get the chance to be evaluated sooner, at a time when it is easier to pivot. It becomes possible to spread the development of complex features over a longer period of time, while collecting revenue and capturing market share earlier in the process.

Unfortunately, from my point of view, there isn’t much in this go-to-market approach that is beneficial for the clients/users/consumers. Particularly for products that are directly paid for (as one-time purchases or as subscriptions), I’ve often felt that one is paying to become an unpaid beta tester and/or an unwilling subject of a focus group study. The notion of simply opting for an alternative is not applicable when the product is quite unique, or when every worthy alternative is doing the same.

Then there is the update fatigue factor. After such an “early launch,” ideally, the inferior initial product is then quickly updated. But this rarely consists of just a couple of updates that make lots of changes in one go. Most likely, the audience will gradually receive multiple updates over time, frequently changing the design, feature set and workflow of the product, requiring constant adaptation. Adding to this annoying situation, these updates may then be rolled out in stages, or as part of A/B testing – leading to confusion regarding the features of the product and how to use them, with different users having conflicting experiences of the same product, something that can almost be seen as gaslighting.

It is difficult to harshly criticize product developers who improve their products post launch, be it by fixing mistakes, adding features or improving performance and fitness for particular purposes. I don’t think I would be able to find anyone who genuinely believes that Cyberpunk shouldn’t have been updated past the initial release, and I am certainly not that person either. It would be even harder to find someone who can argue that Gmail should have stayed exactly the same as its 2004 release… wait, spam filters aside, maybe that won’t be hard at all. You can easily find people longing for ancient Windows versions, too.

Coming back to Cyberpunk, I think most people who played its initial versions (even those that had a great time) will agree that it should have been released and initially advertised as an “early access” title. For many players, the experience was underwhelming to the point of being below the expectations set even by many prior indie early access titles. Those who had a great time back then (mostly those playing on PC) will probably also agree that, given all the features added since the 1.0 release, that version might as well have been considered an “early access” one too. This is why I argue that the problem is not necessarily with the strategy of launching early and updating often, but rather with doing so without properly communicating that expectation.

One must wonder if Cyberpunk would have been so critically acclaimed and reached such a complete feature set, years later, if it had been released in a more presentable state. I can imagine an alternate universe where the 1.0 version of the game releases with no more than what is generally considered an acceptable number of bugs and issues, which get fixed in the next two or three patches over the course of a couple of months. The game receives the deserved critical acclaim (which it received anyway – that was controversial too, but I digress) and, because it releases in a good state, CDPR never feels pressured into making major changes to add back cut features or to somehow “make up for mistakes.” The end result would be a game where there are maybe one or two DLCs available for purchase, but owners of the base version don’t really see many changes beyond what was initially published – in other words, the usual lifecycle of non-controversial games.

It is possible that there is now a bit of a perverse incentive to release eagerly awaited games – and possibly other products – in a somewhat broken and very incomplete state that excels only in particular metrics – in the case of Cyberpunk, those metrics would be the story and worldbuilding. This, only so that they can then remain in the news cycle for years to come, as bugs are fixed and features are added, eventually receiving additional critical acclaim as they join the ranks of games with impressive redemption arcs, as No Man’s Sky and Cyberpunk did. To be clear, I think it would be suicidal to do this on purpose, but the truth is that generating controversy and then “drip-feeding” features might become more common outside of live service games.

A constant influx of changes to a product can frustrate consumers and make it difficult to identify the best option available in the market. Cynics might even say that that’s a large part of the goal: to confuse consumers to the point where they’ll buy based purely on brand loyalty and marketing blurbs; to introduce insidious behavior in software by shipping it gradually across hundreds of continuously delivered updates, making it impossible to distinguish and select between “known good” and “known bad” releases.

I must recognize that this ability to update almost anything at any time is what has generally made software more secure, or at least as secure as it’s ever been, despite threats becoming more and more sophisticated. For networked products, I will easily choose one that can and does receive updates over one that can’t, and I am even more likely to choose one I can update myself without vendor cooperation.

Security has been a great promoter of, and/or excuse for, constant updates and the discontinuation of old software versions and hardware models. The industry has decided, probably rightly, that at least for consumer products, decoupling security updates from feature changes was unfeasible; it has also decided, quite wrongly in my view, that it was too unsafe to give users the ability to load “unauthorized” software and firmware. This is another decision that makes life easier for the developers and has no upsides for users that I can think of. In some cases, the lack of security updates has even been pushed as a way to sell feature updates. For that upselling strategy to work, it’s important that users can’t develop and load the updates/fixes themselves.

I am sure that people in marketing and sales departments worldwide will argue that forcibly pushing feature updates onto users is positive, using happy-sounding arguments like “otherwise they wouldn’t know what they’re missing out on.” I am sure I’ve seen this argument being made in earnest, perhaps more subtly, but it should be obvious to everyone outside of corporate, and more specifically outside of these departments, that in practice this approach just reduces user choice and is less respectful of users than the alternative. Funnily enough, despite being used as a punching bag throughout this essay, Cyberpunk – even with its notable amount of feature updates – is one of the products that respects user freedom the most, as the game is sold DRM-free and CDPR makes all earlier versions of the game available for download through GOG. And this is a product whose purpose is none other than entertainment – now if only we had the same luxury regarding things which are more essential to daily life.

The truth is that – perhaps because of the lack of alternatives in many areas, or simply because of a lack of awareness and education – the public has either decided to accept the mandatory and often frequent feature updates, or decided that they have no option but to accept them. This is where I could go on a tangent about how, despite inching closer and closer every year (in recent years, largely thanks to Valve – thanks Valve!), the year of the Linux desktop probably won’t ever come – but I won’t, that will be a rant for another time; you see, it’ll be easier to write when desktops aren’t a thing anymore.

Until this “release prematurely, and force updates” strategy starts impacting profits – and we’re probably looking at decade-long cycles – it won’t be going away. And neither will this increasingly frequent feeling that one is paying to be a beta tester – be it directly with money, through spending our limited attention, or by sharing personal and business data. The concept of the “patient gamer,” who waits months or even years after a game’s release to buy it at its most complete (and often cheapest) point, might just expand into an increasing number of “patient consumers” – often, much to the detriment of their IT security.