I did not want to close the year without adding something to this website, and to keep with the theme of the blog post series I might never finish, here is another post about Watch Dogs… but this one is more of an audiovisual experience.
That’s right: for comedic purposes, I used Watch Dogs to make a high-effort recreation of the first trailer of GTA VI. Despite the game still being just a trailer, a bunch of rumors, and a vaster-than-usual collection of leaks, that trailer may have hyped and warmed some people’s hearts more than whatever “game of the year” did – and we are talking about a year in which a bunch of very good games released. You can tell that piece of media from Rockstar Games tickled something in me too, or I wouldn’t have spent probably over fifty hours carefully recreating it using a game from a different series.
Soon after the trailer dropped, I got the feeling that I wanted to parody it in some way. The avalanche of trailer reaction content that came immediately after its release – including from respectable channels like Digital Foundry, who probably spent as much time analyzing the trailer from a technical perspective as they spend looking into some actually released games – had me entertain the idea of making a “Which GTA VI trailer analysis is right for you?” sort of meta-analysis meme video. But I realized that making it properly would require actually watching a large portion of that reaction content, and I was definitely not feeling like it. It would also require making a lot of quips about channels and content creators I am not familiar with. Overall, I don’t think it would have been a good use of anyone’s time: I wouldn’t have had as much fun making it, and it wouldn’t be that fun to watch.
The idea of recreating the trailer in other games is hardly original; after all, I have heard of at least two recreations of the trailer in GTA V, there’s at least one in GTA San Andreas as well, and I hope someone has made one in Vice City because it just makes sense. In the same vein as mine, there are also recreations in different game series, including in Red Dead Redemption and in Saints Row. As far as I know, mine is the first one made in Watch Dogs.
I did not intentionally mean anything by using a game whose reception was controversial because of trailers/vertical slice demos that hyped people up for something that, according to many, was not really delivered in the final game (hence the nod to E3 2013 at the start – RIP E3, by the way). Nor is the idea here to say that Watch Dogs, a 2014 game, looks as good as what’s pictured in the trailer for GTA VI, a game set to release eleven years later. Largely, I chose this game because I like Watch Dogs, even if I am not the most die-hard fan you’ll find; because it is the only game other than GTA V where I have some modding experience; and because nobody had done it using Watch Dogs.
This was anything but easy to pull off: the game doesn’t even have a conventional photo mode, let alone anything like the Rockstar Editor or Director Mode in GTA V. There aren’t many mods for the games in the Watch Dogs series, especially not for the two most recent ones, and the majority of these mods aren’t focused on helping people make machinima. One big exception is the camera tool that I am using, and even that was primarily built for taking screenshots – keep in mind I had to ask the author for a yet-to-be-released version that supported automatic camera interpolation between two points.
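In case you’re wondering what “automatic camera interpolation between two points” amounts to in practice, here’s a rough sketch of the idea – in Python rather than the Lua the mod actually uses, and with made-up field names, so treat it as purely illustrative:

# Rough sketch of "camera interpolation between two points": given two camera
# keyframes, blend position and field of view linearly over time.
# Python instead of Lua, and the field names are made up for illustration.

def lerp(a, b, t):
    # Linear interpolation between a and b, with t going from 0.0 to 1.0.
    return a + (b - a) * t

def interpolate_camera(start, end, t):
    # Blend two camera keyframes (position + field of view) at time t.
    return {key: lerp(start[key], end[key], t) for key in start}

# Example: a 5-second move between two keyframes, sampled at 30 fps.
key_a = {"x": 0.0, "y": 0.0, "z": 2.0, "fov": 60.0}
key_b = {"x": 10.0, "y": 3.0, "z": 2.5, "fov": 45.0}
frames = 5 * 30
for frame in range(frames + 1):
    cam = interpolate_camera(key_a, key_b, frame / frames)
    # A real tool would push `cam` to the game's free camera every frame;
    # here we just print a few samples.
    if frame % 30 == 0:
        print(frame, cam)

The real tool presumably handles rotation, easing and more than two keyframes, but that blend-over-time loop is the gist of what I asked the author for.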
I started by recreating just a few shots from the beginning of the trailer. I liked those brief seconds of video so much, and they sparked enough interest in the modding community, that I slowly went through recreating the rest. This required bringing more mods into the equation – including a WIP in-game world editor that was released with light protection measures (probably to avoid people bringing it into online modes or adding it to shitty mod merge packs?) which I had to strip, so I could make it play along with the rest of the tools I was using, including some bespoke ones.
Lots of Lua code was injected into the game in the making of this video, and as I said, this is more for comedy and that sense of pride and accomplishment than any sort of game/mod showcase… but I’m happy to report that, besides some minor retiming, color grading, and artificial camera shake and pan effects, all shots were achieved in-engine with only minor visual effects – other than the two shots involving multiple bikes on screen, which required more trickery.
Then there was the careful recreation of every 2D element in Rockstar’s video, including avatars, icons, text placement, and an hours-long search for fonts whose results I am still not 100% happy with. One of the fonts Rockstar used is definitely Arial, but with a custom lowercase Y… I no longer have those notes, but at one point I could even tell you which font foundry was most likely to have supplied the one in question. And did I mention how I also recreated the song cut Rockstar used, so I wouldn’t have to rely on AI separation with all its artifacts?
I think it was while working on the “mud club” shot that I realized I just wouldn’t be able to recreate everything as precisely as I would like. One idea that crossed my mind was to use the infamous spider tank in place of the monster truck in that shot, but I just couldn’t find an easy way to have the spider tank there with the proper look, while still being able to control my mods. Sure, there were multiple technical solutions for it, but that would have meant spending days or weeks just on those two or so seconds of video. I also wouldn’t have been able to find matching animations for the characters. So I decided to take some shots in a different direction, one that alludes to the setting of Watch Dogs.
Eventually, I let that creative freedom permeate other points of the video. For example, the original “High Rollerz Lifestyle” shot would have been somewhat easy to recreate (the animations for the main character in it notwithstanding) but I felt I had already proven I could recreate easy shots, so I decided to have some fun with it and instead we ended up with “High Hackerz.” Similarly, the final shot features three protagonists instead of two, because I couldn’t decide which one was the most relevant “second character” in the world of Watch Dogs.
The end result seems to have been met with great acclaim, judging by all the public and private praise I’ve been receiving. There are people asking me to continue making this sort of thing, too, which I am not sure is something I want to pursue, especially not on a regular basis – I think a large portion of the fun I had making this was precisely because it had a sufficiently closed scope and was sufficiently distinct from what I usually do, and I suspect I would have a worse time making more open-scoped machinima, particularly in this game where the tooling is only “limited but functional.”
There are also people asking for this sort of thing done in Watch Dogs 2 rather than in the first game – but there are even fewer mods for that game, and I have even less knowledge of its internals. Judging by the title of Rockstar’s trailer, it’s likely there will be at least a second trailer, so maybe I can combine the wishes of both sets of people by then. It’s probably not something I’ll feel the drive to do, though – it will also depend on how busy I am with life by the time that second trailer releases.
As I was taking care of the last shots and editing tweaks, I was definitely feeling a bit tired of this project, and subconsciously I probably started taking some shortcuts. Looking back on the published result, there are definitely aspects I wish I had spent some more time on. There is an entire monologue section from the trailer missing from my video, which I can pass off as an artistic decision, but the truth is that I only realized I hadn’t recreated/found a replacement for it after the video was up on YouTube. Similarly, for the effort this took, I wish I had captured the game at a resolution higher than 1080p (my monitor’s vertical resolution), because after going through editing (having to apply cropping, zooming, etc.) the quality of the video really suffers in some aspects. But the relevance of this meme was definitely dropping by the day, and if I had spent much more time on it, not only would I have been sick and tired of the entire thing, the internet would also have moved on. It is what it is, and once again similarities are found between art and engineering: compromises had to be made.
One thing is for sure: the next video I publish on my YouTube channel is unlikely to live up to these newfound expectations, and I like to think that I have learned enough to deal with that. Meanwhile, and on the opposite note, I hope that 2024 lives up to all of your expectations. Have a great new year!
Before I start, a word about this website. It has mostly sat abandoned, as having a full-time software development job doesn’t leave me with the comparatively endless amounts of free time and mental bandwidth I once had. What remains, in terms of screen time, is usually spent working on other projects or doing things unrelated to software development that require lower amounts (or different kinds) of mental activity, like playing ViDYAgAMeS, arguing with people on Discord, or mindlessly scrolling through a fine selection of subreddits and Hacker News. While I quite enjoy writing, it’s frequently hard to find something to write about, and while I have written more technical posts in the past – this one about MySQL being the most recent example – these often feel too close to my day job. So, for something completely different, here’s some venting about a video game series – this was in the works for over a year, and is my longest post yet. Maybe this time I’ll actually manage to start and finish a series of blog posts.
Watch Dogs is an action-adventure video game series developed and published by Ubisoft, and it is not their attempt at an Animal Crossing competitor, despite what the name might suggest. The action takes place in open worlds that are renditions of real-life regions; at the time of writing, there are three games in the series: Watch Dogs (WD1), released in 2014 and set in a near-future reimagination of Chicago; Watch Dogs 2 (WD2), a 2016 game set in a similar “present-day, but next year” rendition of the San Francisco Bay Area; and Watch Dogs: Legion (WDL), a 2020 game set in a… uh… Brexit-but-it’s-become-even-worse version of London. The main shtick of these games, in comparison with others in the same genre, is their heavy focus on “hacking,” or perhaps, put more adequately, “an oversimplification, for gameplay and storytelling purposes, of the delicate new information security and societal challenges present in our internet-connected world.”
The games fall squarely into two categories: “yet another Ubisoft open world game” and what some people call “GTA Clones.” It’s hard to argue against either categorization, but the second one, in itself, has some problems. The three Watch Dogs games came out after the initial release of the latest entry in the Grand Theft Auto series (GTA V in 2013), and GTA VI is yet to be officially announced, so snarky people like me could even say that, if anything, Watch Dogs is a continuation, not a clone, of GTA!
More seriously, there are people on the internet who will happily spend some time telling you how “GTA clone” is a terrible designation that is actually hurting open world games in general, by discouraging developers from making more open world games with a modern setting – and I generally agree with them. But I prefer to attack this “GTA clone” designation in a different way, the childish one, where you point the finger back at the accuser and yell “you too!”: GTA Online has, in multiple of its updates, also “cloned” some of the gameplay elements most recently seen in Watch Dogs, and GTA in general has also taken inspiration from different open world games released over the years.
In a 2018 update, Rockstar brought a “Player Scanner” to GTA Online, which is reminiscent of the “Profiler” in Watch Dogs games, and in the same update, they also introduced weaponized drones that can be compared to the drone in WD2. More recently, GTA Online received a new radio station whose tracks are obtained from collectibles spread around the world – similar to how the media player track list can be expanded in WD1. I doubt that Watch Dogs was the primary motivation or inspiration for these mechanics, and they were hardly exclusive to Watch Dogs, but the point is that the “cloning” argument can go both ways.
Nowadays, when it comes to open world games, there’s hardly anyone “cloning” a particular game series. Watch Dogs games are GTA competitors, but the same can be said about countless other games, including many that don’t even make use of open world mechanics. None of this negates the fact that, despite not being a “GTA clone,” Watch Dogs ticks all the boxes of said unfortunately named category, for which a better name would totally be “open world games set in a place recognizable as the world we presently live in.” And therefore I won’t hide the fact that many of the comparisons I’ll make will be directly against the two “HD Universe” GTA titles, IV and V, as these are definitely the most well-known and successful games in said category.
I have played through all three games in the Watch Dogs series, on PC. I’m certain I spent more time than the average player on the first two games, having played through both twice – going for the completionist approach the first time around – and having spent more time than I’d like to admit in the multiplayer modes of WD1 and WD2. By “completionist approach,” I mean getting the progression meter to 100% in the first game, and going for all the collectibles spread around the map in WD2, in addition to completing all the missions. Why? Because, in general, I found their gameplay and virtual worlds enjoyable, regardless of their story or general “theme.”
While players and Ubisoft marketing tend to overly focus on the “hacking” aspect of the series, in my opinion its most distinctive aspect, compared to other open world games, is the fact that, more than being shooters, these can be open world puzzle games, requiring some thought when approaching missions, especially when opting for a stealthier approach. Mainly in the two most recent games, and to some degree in the first one too, there are typically multiple approaches to completing missions, catering to wildly different play styles. This extends even to their multiplayer aspects and adds to the replayability of the games. For example, I went for a mainly “guns blazing” approach on my first WD2 playthrough and settled on a “pacifist” approach when I revisited WD2 for a second time – which, in my opinion, is the superior way to get through the game’s story. But let’s not get ahead of ourselves.
Initially, I was going to write a single post with my thoughts about the three games. As I was writing some notes on what I wanted to say, I realized that a single post would be insufficient – even the individual posts per game are going to be exhaustingly long. So I decided to write separate posts, in the order the games were released, which is also the order in which I played them. This post will be about the first Watch Dogs, and the next one will be about its sole major DLC, called Bad Blood.
My notes file for the whole series has over 200 bullet points, so hold on to your seats. Before we continue onto WD1, I just want to mention one more thing: I’m going to assume you have some passing familiarity with the three games, even if you have not played them yourself. I won’t be doing much more series exposition; I mostly want to vent about it, not write a recap. Still, I’ll try to give a bit of a presentation on each thing I’ll talk about, so that those who have played the games before can have a bit of a recap, and so that those who haven’t – but for some reason are still reading this – aren’t left completely in the dark.
Onto what is probably the lengthiest ever rant/analysis/retrospective of WD1. Enjoy!
“An appeal to celebrity is a fallacy that occurs when a source is claimed to be authoritative because of their popularity” [RationalWiki]
Today I was greeted by this Discord ping:
What I want to talk about is only very tangentially related to what you see above, and the result of some shower thoughts I had after reading that. I did not watch the video, and I do not intend to, just like I haven’t watched most of DarkViperAU’s “speedrunner rambles” or most of his other opinion/reaction videos about a multitude of subjects and personalities. My following of these YouTube drama episodes hasn’t gone much beyond reading the titles of DVAU’s videos as they come up on my YouTube subscriptions feed. What I want to talk about is precisely why I don’t watch those videos and why I think that many talented “internet celebrities” or “content creators” would be better off not making them, and/or why the fans who admire them for their work alone would be better off ignoring that type of content.
…
OK, I was planning on writing a much longer post but I realized that my arguments would end up being read as “reaction videos and YouTube drama are bad and you’re a bad person if you like them”, which is really not the argument that I want to make here. Instead, let me cut straight to the chase:
Just because you admire someone’s work very much,
that doesn’t mean that you must admire its creators just as much,
nor that you should agree with everything they say
(nor that everything they say and do is right),
and the high quality of some of their work does not necessarily make them quality people nor makes all of their work high-quality.
This is one of those things that is really obvious in hindsight. Yet I often find it hard to detach works from their creator, and I believe this is the case for a majority of people, otherwise the “appeal to celebrity” fallacy would not be so common, and there wouldn’t be so many people interested in knowing what different celebrities have to say in areas that have nothing to do with what made them popular and successful in the first place.
This is not a “reaction/opinion pieces are bad” argument. If someone’s most successful endeavor is precisely to be an opinion maker, then I don’t see why they shouldn’t be cherished for that, and their work celebrated for its quality. But should you not like their work, you’re still allowed to like them as a person, and vice-versa.
DarkViperAU is an example of a “newfound internet celebrity” I admire for much of their work, but who is progressively veering off into a different type of content/work (of the “opinion making” type) which, if I were to pay attention to it, could greatly reduce my enjoyment of the parts of his content that I find great. For me, the subject of today’s ping on his Discord was a great reminder of that, and it sent me off on a bit of a shower thought journey.
While I am not fond of end-of-year retrospectives – calendar conventions do not necessarily align with personal milestones – 2020 was definitely the most awkward year in recent times for a majority of the world population. It was an especially awkward year for me, as among many other things, it was when I fell into what I’d describe as an “appeal to a celebrity’s work” fallacy. I initially believed I’d really like to work with people who make a project I admire very much, but over the months I found some of their methods and personalities to really conflict with my personal beliefs, and yet, I kept giving my involvement second chances, because I really felt like the project could use my contribution.
In the end, there’s no problem in liking an art piece exclusively because of its external appearance, even if you are not a fan of the materials nor of some of its authors. And if you think you can improve on that piece of art, expect some resistance from the authors, keeping in mind it might fall apart as you attempt to work on it. Sometimes making your own thing from scratch is really the better option: you might be called an imitator and the end result may even fall short of your own expectations, but you’ll rest easy knowing that you have no one but yourself to blame.
On a more forward-looking note, I wish you all the best for the years to come after 2020. I have a new Discord server which, unlike the UnderLX one, is English-speaking and not tied to any specific project or subject. My hope is to gather there the people I generally like to talk to and work with, so we can all have a great time – you know, the typical thing for a generalist Discord server. I know this is an ambitious goal for just yet another one of these servers, but that won’t stop me from trying. My dear readers are all invited to join Light After Dinner.
Dear regular readers: we all know I’m not a regular writer, and you were probably expecting this to be the second post in the series about internet forums in 2018. That post is more than due by now – at this rate it won’t be finished by the end of the year – even though the series purposefully never had any announced schedule. I apologize for the delay, but bear with me: this post is not completely unrelated to the subject of that series.
Discord, in case you didn’t know, is free and proprietary instant messaging software with support for text, voice and video communication – or as they put it, “All-in-one voice and text chat for gamers that’s free, secure, and works on both your desktop and phone.” Launched in 2015, it has become very popular among gamers indeed – even though the service is definitely usable and useful for purposes very distant from gaming, and to people who don’t even play games. In May, as it turned three years old, the service had 130 million registered users, but this figure is certainly out of date, as Discord gains over 6 million new users per month.
If you have ever used Slack, Discord is similar, but free, easier for random people to set up, and designed to cater to everyone, not just businesses and open source projects. If you have ever used Skype, Discord is similar, but generally works better: the calls have much better quality (to the point where users’ microphones are actually the limiting factor), it uses fewer system resources than modern Skype clients on most platforms, and its UI, stability and reliability don’t get worse every month as Microsoft decides to ruin Skype some more. You can have direct conversations with other people or in a group, but Discord also has the concept of “servers”, which are usually dedicated to a game, community or topic, and have multiple “channels” – just like IRC and Slack channels – for organizing conversations and users into different topics. (Beware that despite the “server” name, Discord servers cannot be self-hosted; in technical documents, servers are called “guilds”).
Example of Slack bot in action. Image credit: Robin Help Center
Much like in Slack (and, more recently, Skype, I believe), bots are first-class citizens, although they are perhaps not as central to the experience as in many Slack communities. In Discord, bots appear as any other user, but with a clearly visible “bot” tag, and they can send and receive messages like any other user, participate in text and voice chats, and perform administrative/moderation tasks if given permission… to sum it up, the only limit is how much code is behind each bot.
Example of Discord bot in action. Discord bots can also join voice channels, e.g. to play music.
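To make the “the only limit is how much code is behind each bot” point a bit more concrete, here is what a minimal bot looks like with a recent version of the discord.py library – purely illustrative, with a placeholder token and a toy command, and not the code behind any bot mentioned in this post:

# Minimal Discord bot sketch using the discord.py library (recent versions).
# The token and the "!ping" command are placeholders.
import discord

intents = discord.Intents.default()
intents.message_content = True  # required to read the text of messages

client = discord.Client(intents=intents)

@client.event
async def on_ready():
    # Runs once the bot is connected and shows up online, "bot" tag and all.
    print(f"Logged in as {client.user}")

@client.event
async def on_message(message):
    # Ignore the bot's own messages to avoid reply loops.
    if message.author == client.user:
        return
    # Answer a toy command, just like any other user typing in the channel.
    if message.content.startswith("!ping"):
        await message.channel.send("pong")

client.run("YOUR_BOT_TOKEN")  # token obtained from the Discord developer portal

From the perspective of everyone else in the server, the replies from a script like this are just messages from another user – one that happens to carry the “bot” tag.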
I was introduced to Discord by a friend at the end of 2016. We were previously using Skype, and Discord was – even at the time – already clearly superior for our use cases. I found the “for gamers” aspect of it extremely cheesy, so much so that for a while it put me off using it as a Skype replacement. (At the time, we were using Skype to coordinate school work and talk about random stuff, and I really wasn’t a “gamer”, on PC or any other platform.) I finally caved in, to the point where I don’t even have Skype start with my computers anymore, and the Android app stays untouched for weeks – I only open it to talk to the two or three people who, despite heavy encouraging, didn’t switch to Discord. The only thing Discord didn’t have back then was screen sharing – that’s no longer the case – but it was so good that we kept using it and went with makeshift solutions for screen sharing.
As time went by, I would go on to advocate for the use of Discord, join multiple servers, create my own ones and even build a customized Discord bot for use in the UnderLX Discord server. Discord is pleasant to use, despite the fact that it tends to send duplicate messages under specific terrible network conditions – the issue is more prominent when using it on mobile, at least on Android, over mobile data.
Those who have been following what I say on the internet for longer might be surprised that I ended up using and advocating for the use of a proprietary chat solution. After posts such as this one, where I look for a “free, privacy friendly” IM/VoIP solution, or the multiple random forum posts where I complain that all existing solutions are either proprietary and don’t preserve privacy/prevent data collection, or are “for neckbeards” for being unreliable or hard to set up, seeing me talk enthusiastically about Discord might make some heads spin.
I suppose this apparent change of heart is fueled by the same reason why many people, myself included, use the extremely popular digital store, DRM platform (and wannabe Discord competitor… a topic for later) Steam: convenience. It’s convenient to use the same store, launcher and license enforcer for all games and software; similarly, it’s convenient to use the same software to talk to everyone, across all platforms, conversation modes, and topics. It’s an exchange of freedom and privacy for convenience.
Surprise, surprise: it turns out that making a free-as-in-freedom, libre if you prefer, platform for instant messaging that provides the desired privacy and security properties, in addition to all the features most people have come to expect from modern non-free platforms like Facebook Chat or Skype, while being as easy to use as them, is very difficult. Using the existing popular platforms does not involve setting up servers, sharing IP addresses among your contacts, dealing with DDoS attacks against those servers or the contacts themselves, etc., and for an alternative platform to succeed, it must match all of that, and ideally be prepared to deal with the friction of getting everyone and their contacts to use a different platform. It was already difficult in 2013 when I wrote that post, and the number of hard-to-decentralize features in the modern chat experience didn’t stop growing in these five years. The technology giants are not interested in developing such a platform, and independent projects such as Matrix.org are quite promising but still far from being “there”. And so everyone turns to whatever everyone else is using.
In my opinion, Discord happened to be the best of the currently available, viable solutions that all my friends could actually use. It is, or was, a company and a product focused on providing a chat solution that’s independent from other products or larger companies, unlike Messenger, Hangouts or Skype, which come with all the baggage from Facebook, Google and Microsoft respectively. Discord, despite having the Nitro subscription option that adds a few non-essential features here and there, is basically free to use, without usage limits – unlike Slack, which targets company use and charges by the user.
List of Discord Nitro Perks in the current stable version of Discord. Discord is free to use, but users can pay $4.99/month or ten times that per year to get access to these features.
What about sustainability – what is Discord’s business model? To me it was painfully obvious that Nitro subscriptions couldn’t make up for all the expenses. Could they just be burning through VC money only to die later? Even by selling users’ data, it wasn’t immediately obvious to me that the service would be sustainable on its own. But I never thought too much about this, because Discord is super-convenient and alternative popular solutions run their own data collection too, so I just shrugged and moved on. If Discord eventually ran out of money, oh well, we’d find an alternative later.
Back to praising the product, Discord is cross-platform, with a consistent experience across all platforms, and can be used in both personal/informal contexts and work/formal contexts. In fact, Discord was initially promoted to Reddit communities as a way to replace their inconvenient IRC servers, and not all of those communities were related to gaming. If only it didn’t scream “for gamers” all over the place…
I initially dismissed this insistent targeting of the “gamers” market as just a way to continue the segmentation that already existed… after all, before Discord there was TeamSpeak, which was already aimed at gamers and indeed primarily used by them. By continuing to target and cater to this very big niche, Discord avoided competing head-to-head with established players in the general instant messaging panorama, like the aforementioned Skype, Facebook Messenger and Hangouts, and also against more mobile-centric solutions like WhatsApp or Telegram.
I believed that at some point, Discord would either gradually drop the “chat for gamers” moniker, or introduce a separate, enterprise-oriented service, perhaps with a self-hosting option – although Slack has taught us that self-hosting isn’t necessary for a product to succeed in the enterprise space. This would be their true money-maker – after all, don’t they say the big money is on the enterprise side of things? Every now and then I joked, half-seriously, “when are they going to introduce Discord for Business?”
I was half-serious because my experience using Discord, a supposedly gaming-oriented product, for all things non-gaming, like coordinating an open source project or working remotely with my colleagues, was superb, better than what I had experienced in my admittedly brief contact with Slack, or the multiple years throughout which I used Skype and IRC for such things. The “for gamers” aspect was really a stain on what is otherwise a product perfectly usable in formal contexts for things that have nothing to do with playing games, and in some situations it stopped me from providing my Discord ID and suggesting Discord as the best way to contact me over the internet for all the things email doesn’t do.
These last few days, Discord did something that solved the puzzle for me, and made their apparent endgame much more clear. It turns out their focus on gaming wasn’t just because the company behind Discord was initially a game development studio that had pivoted into online chat, or because it was a no-frills alternative to TeamSpeak (and did so much more), nor because it was an easy market to get into, with typically “flexible” users that know their way around installing software, are often eager to try new things, use any platform their parents are not on, and share the things they like with other players and their friends. I mean, all of these could certainly have been factors, but I think there’s a bigger thing: it turns out Discord is out to eat Steam’s (Valve’s) lunch. Don’t believe me? Read their blog post introducing the Discord Store.
In hindsight, it’s relatively obvious this was coming; in fact, it’s a move so genius it must have been planned all along. Earn the goodwill of the gamer community, get millions of gamers who just want a chat client that’s better than what Steam and Skype provide while being as universal as those among the people they want to talk to (i.e. gamers), and when the time is right, become a game store which just happens to have those millions of potential clients already in it. It’s like organizing a really good bikers’ convention, becoming famous for being a really good bikers’ convention, and then during one year’s edition, ta-da! It’s also a dealership!
The most interesting part about all this, in my opinion, is that Discord and Steam’s histories are, in a way, symmetrical. Steam, launched in 2003, was created by Valve – initially a game development company – as a client for their games. Steam would evolve to be what’s certainly the world’s most recognizable and popular cross-platform software store and software licensing platform, with over 150 million users nowadays (and this number might be off by over 30 million). As part of this evolution, Steam got an instant messaging service, so users could chat with their friends, even in-game through the Steam overlay. After a decade without major changes, a revamped version of the Steam chat was recently released, and it’s impossible not to draw comparisons with Discord.
The recently introduced Steam Chat UI. Sure, it’s much nicer, and you can and should draw comparisons, but it’s no Discord… yet.
I had the opinion that Steam could ditch its chat component altogether and just focus on being great at everything else they do (something many people argue they haven’t been doing lately), and I wasn’t the only one thinking this. We could just use Discord, whose focus was being a great chat software, and Steam could focus on being a great store. But now, I completely understand what Valve has done, and perhaps their major failure I can point out right now was simply taking too long to draft a reply. Because, on the other, “symmetrical” side of the story…
Discord was developed by Hammer & Chisel, recently renamed Discord Inc., a game development studio founded in 2012, which only released one unsuccessful game before pivoting into what they do now – which used to be developing an instant messaging platform, but apparently now includes developing an online game store too. Discord, chat software that got a store; Steam, a store that got chat functionality, both developed by companies that are or once were into game development. Sadly, before focusing on the game store part of things, Discord, Inc. seems to have skipped the part where they would publish great games, their sequels, and stop as they leave everyone asking for the third iteration.
It is my belief that it was not too long after Discord became extremely successful – which, in my opinion, was some time in 2016 – and a huge number of gamers got on it, that they set their sights on becoming the next Steam. It’s not just gamers they are trying to cater to, as they started working with game developers to build stuff like Rich Presence long ago, not to mention their developer portal was always focused not just on Discord bots, but on applications that authenticate against Discord and generally interact with it. This certainly helped open communication channels with some game developers, which may prove useful for getting games on their store.
Discord is possibly trying to eat some more lunches besides Valve’s, too. Discord Nitro (their subscription-based paid tier, which adds extra features such as the ability to use custom emoji across all servers or upload larger files in conversations) has always seemed to me like a poor value proposition, but I obviously know this is not the universal opinion, as I have seen multiple Nitro subscribers. Maybe it’s just that I don’t have enough disposable income; anyway, Nitro just became more interesting, as now “It’s kinda like Netflix for games.” From what I understand, it’ll work a bit like Humble Monthly, but it isn’t yet completely clear to me whether the games are yours to keep – like on Humble Monthly – or if it’s more like an “extended free weekend” where Nitro users get to play some games for free while they are in rotation. (Update: free games with Discord Nitro will not be permanent.)
This Discord pivot also presents other unexpected ramifications. As you might know, on many networks all game-related stuff (like Steam) is blocked, even though instant messaging and social networks are often not blocked, as they are used to communicate with clients, suppliers, or even between co-workers, as is the case with Slack. I fear that by introducing a store, Discord will fall even more into the “games” bucket, and once it definitively earns the perception of being a games-only thing, it’ll be blocked on many work and school networks, complicating its use for activities besides gaming. The positive side of things is that if they decide to launch that enterprise version, this is an effective way of forcing businesses to use it instead of the free version, as the “general populace” version will be too tightly intertwined with the activity of playing games.
I’ll be honest… things are not playing out the way I wish they would. Discord scares me because now I feel tricked and who knows what other tricks they have up their sleeve. I would rather have an awesome chat and an awesome store, provided separately, or alternatively, an awesome chat and store, all-in-one. (And if the Discord team reads this, they’ll certainly say “but we’re going to be the awesome chat and store, all-in-one!”) But at this rate, we’ll have two competing store-and-chat-platforms… because we didn’t have enough stores/game clients or instant messengers, right?
Because of course this one had to be here, right? I could also have added a screenshot of Google’s IM apps, but I couldn’t be bothered to find screenshots of all of them, let alone install them.
You can of course say, “just pick one side and your life will be simpler”, but we all know this won’t be the case. Steam chat is a long way from being as good as Discord, and the Discord store will certainly take its time to become a serious Steam competitor. Steam chat will never sound quite right for many of Discord’s non-officially-assumed use cases; for example, even if Steam copies all of Discord’s features and adds the concept of servers/guilds, it’ll never sound quite right to have the UnderLX server on Steam, will it? (Well… unless maybe UnderLX pivots into something else as well, I guess.) Similarly, it’ll be harder to “sell” Discord’s non-gaming use cases by telling people to ignore the “for gamers” part, as I’ve been doing, if Discord is blatantly a game store and game launcher.
Of course I’ll keep using Discord, but I’ll probably not recommend it as much now, and of course I’ll keep using Steam, and mostly ignoring its chat capabilities – not least because most people I talk to are not on there, and most of those that are, are also on Discord. But for now, I’ll keep the games tab on Discord disabled, and I seriously hope they’ll keep providing an option to disable all the store/launcher stuff… so I can keep hiding the monster under the bed.
“We’re adding another dimension to computing.”
Cliché and meaningless, but go on. I guess this is something revolutionary…
“Where digital respects the physical.”
Because, currently, digital somehow violates the laws of the universe?
“And they work together to make life better.”
“they”? who’s “they”? Oh, digital and physical, sorry. You really should learn to use commas instead of periods. So the digital and physical work together, uh? My guess is that this is about a robot.
“Magic Leap One is built for creators who want to change how we experience the world.”
Finally, now you’ve told me who it is for, and it probably helps to “change how we experience the world”, because it’s for creators who want to do that.
You still haven’t told me what it is or what it does, and bonus points for jamming the “creators” cliché in there! Congratulations, you have passed your final assignment at the Unicorn University of Shitty Vaporware Descriptions with the grade of: flying colors!
The day I turn this website into a portfolio/CV-like thing will come sooner or later, and arguably that’s a better use for the domain gbl08ma.com than this blog with posts nobody cares about – except when I rant about new operating systems from Microsoft. But if you really care about such posts, do not worry: the blog will still exist, it just won’t be as prominent.
Meanwhile, off-topic intro aside, the content usually seen on the kind of presentation websites everyone-and-their-cat seems to have these days will have to wait. In anticipation of that kind of stuff, let’s go on a somewhat depressing journey through my eight years of programming experience.
The start
The beginning was what many people would consider a horror movie: programming in Visual Basic for Applications in Excel spreadsheets, or VBA for short. This is (or was, at the time; I have no idea how it is now) more or less a stripped down version of VB 6 that runs inside Microsoft Office and does not produce stand-alone executables. Everything lives inside Office documents.
It still exists – just press Alt+F11 in any Office window. Also, the designer has Windows 7 Basic window styles… on Windows 10, which supposedly ditched all that?
I was introduced to it by my father, who knows his way around Excel pretty well (much better than I probably ever will, especially as I have little interest). My temporal memory is quite fuzzy and I don’t have file timestamps with me for checking, so I was either 9, 10 or 11 years old at the time, but I’m more inclined to think 9-10. I actually went quite far with it, developing an Excel-backed POS system with support for customer- and operator-facing character LCD screens and, if I remember correctly, support for discounts and loyalty cards (or at least the beginnings of it).
Some of my favorite things I did with VBA consisted of making it do things it was not really designed for, such as messing with random ActiveX controls and making it draw strange-looking windows (forms) and controls through convoluted Win32 API calls I’d copied from some website. I did not have administrator rights to my computer at the time, so I couldn’t just install something better. And I doubt my Pentium III-powered computer, already ancient at the time (but which still works today), would have kept up with a better IDE.
I shall try to read these backup CDs and DVDs one day, for a big trip down the memory lane.
Programming newb v2
When I was 11 or 12 I was given a new computer. Dual core Intel woo! This and 2GB of RAM meant I could finally run virtual machines, and so I was put on probation: I administered the virtual computers, and soon the real hardware followed (the fact that people were tired of answering Vista’s UAC prompts also helped, I think). My first encounter with Linux (and a bunch of other, more obscure OSes I tried for fun) was around this time. (But it would take some years for me to stop using Windows primarily.)
Around this time, Microsoft released the Express (free) editions of VS 2008. I finally “upgraded” to VB.NET, woo! So many new things to learn! Much of my VBA code needed changes. VB.NET really is a better VB, and thank Microsoft for that, otherwise the VB trauma would be much worse and I would not be the programmer I am today. I learned much about the .NET framework and Visual Studio with VB.NET, knowledge that would be useful years later, as my more skilled self did more serious stuff in C#.
In VB.NET, I wrote many lines of mostly shoddy code. Much of that never saw the light of day, but there are some exceptions: multiple versions of Goona Browser made their way to the public. This was a dual-engine web browser with an advanced UI and futuristic concepts that some major players would copy years later.
How things looked, on good days (i.e. when it didn’t crash). Note the giant walls of broken English. I felt like “explain ALL the things”! And in case you noticed the watermark: yes, it was actually published on Softpedia.
If you search for it now, you can still find it, along with its website which I made mostly from scratch. All of this accompanied by my hilariously broken English, making the trip to the past worth its weight in laughs. Obviously I do not recommend installing the extremely buggy software, which, I found out recently, crashes on every launch but the first one.
Towards the later part of my VB.NET era, I also played a bit with C#. I had convinced myself I wanted to write an operating system, and at the time there was a project called COSMOS that allowed for writing a (pretty limited) OS in C#… of course, my “operating” systems were not much beyond a fancy command line prompt and a help command. All of that is, too, stored on optical media, somewhere… and perhaps on the disk of said dual-core computer. I also studied and modified open source programs made in C# (such as the file downloader described in the Goona Browser screenshot) for my own amusement.
All this happened while I developed some static websites using Visual Web Developer Express as the editor. You definitely don’t want to see those (mostly never published) websites, but they were instrumental in learning a fair bit of HTML and CSS. Before Web Developer I had also experimented with Dreamweaver 8 (yes, it was already old back then) and tried my hand at animation with Flash 8 (actually, I had much more fun using it to disassemble existing SWFs).
Penguin programmer
At this point I was 13 or so and my first contact with Linux was more than done, through VMs and Live CDs, aaand it happened: Ubuntu became my main OS. Microsoft “jail” no more (if only I knew what a real jailed platform was at the time…). No more clunky .NET! I was fed up with the high RAM usage of Goona Browser, and with bugs I was having a hard time debugging due to the general code clumsiness.
How Ubuntu looked like when I first tried it. Good times. Canonical, what did you do?
For a couple of years, in terms of desktop development, I only made some Python scripts for my own amusement and played a very small bit with MonoDevelop every time I missed .NET. I also made a couple of Lua scripts for Rockbox. I learned much about Linux usage and system maintenance as I used it more and more on my own computers and on my first Virtual Private Servers, which I got after much drama in the free web hosting communities. Ugh, how I hate cPanel.
It was around this time that g.ro.lt and n.irc.su appeared. g.ro.lt was a URL shortener that would later evolve into 4.l.to and later tny.im. n.irc.su was a social network built on Elgg, which obviously failed. I also made some smaller websites, like one that would take you to random image hosting websites, URL shorteners and pastebins, so you would not use the same service every time you urgently needed one. These represented my first experiences with PHP programming.
I have no pictures to show. The websites are long gone, not on the Internet Archive, and if I took screenshots, I have no idea where I put them. Ditto for the logos. I believe I still have the source code for the random-web-service website somewhere, at least the front page layout.
All this was running on top of free stuff: free (and crappy) subdomains, free (and crappy) web hosting, free (and less crappy) virtual servers. It would take me some time until I finally convinced myself I needed to spend some money for better reliability, a modicum of support and less community drama. And even then I would spend Bitcoin, which I had earned back when it was really cheap, making the rounds of silly faucets and pulling money out of CPAlead-like offers through the use of multiple proxies (oh, the joy of having multiple VPS…). To this day I still don’t have a PayPal account.
This time, when I was actively developing tny.im (as opposed to just helping maintain it), was the peak of my gbl08ma-as-web-developer phase. As I entered and went through high school, I would drift further and further away from HTML and friends (but not from server maintenance), to embrace something completely different…
Low level, little resources: embedded systems
For high school math everyone had to use a graphing calculator. My math teacher recommended Casio calculators (not out of any vested interest) because of their ease of use, and even excitedly mentioned, Casio leaflet in hand, the existence of a new and awesome color screen model that “did everything and some more”. And some days later I had said model in my hands, a Casio fx-CG 20, or Prizm, which had been released about a year before. The price difference from the earlier dot-matrix screen Casio calcs was too small to pass on the color screen.
I was turning 15, or had just turned 15. I remember setting up the calculator and thinking, not much later, “I want to code for this thing”. Casio’s built-in Basic dialect is way too limited (and after having coded in “real” languages, Basic felt silly). This was in September 2011; in March of the following year I would release my first Prizm add-in, CGlock, calculator PIN-locking software.
Minimalist look, yay! So much you don’t even notice it’s a color screen.
This was my first experience with C; I remember struggling with pointers, and getting lots of compilation warnings and errors, and run-time errors. Then at some point everything just “clicked” and C soon became my main language. Alas, for developing native software for the Prizm, this is the only option (besides using C++ without most of its features, not even the “new” keyword).
The Prizm is a horrible platform, especially for newbie C programmers. You can’t use a debugger, nor look at memory contents, the OS malloc/free implementation has bugs (and the heap is incredibly small, compared to the stack), and there’s always that small chance some program damages your calculator, or at least corrupts your precious files and notes. To this day, using valgrind and gdb on the desktop feels to me like science fiction made real. The use of alloca (stack allocation) ends up being preferred over dynamic allocation, leading to awkward design decisions.
Example of all the information you can get about an error in a Prizm add-in. It’s up to you to go through your binary (and in some cases, disassemble the OS) to find out what these mean. Oh, the bug only manifests itself when compiling with optimizations and without symbols? Good luck…
There is a proprietary emulator, but it wasn’t designed for software development and can’t emulate certain things. At least it’s better than risking damage to expensive hardware. The SuperH-4 CPU runs at 58 MHz and add-ins have access to about 600 KiB of memory, which is definitely better than with classic z80-powered Texas Instruments calculators, but one still can’t afford memory- or CPU-intensive stuff. But what you gain in performance and screen resolution, you lose in control over the hardware and the OS, which still have lots of unknowns.
Programming for the Prizm taught me what it’s like to work without the help of the C standard libraries (or rather, with the help of incomplete and buggy standard libraries), what a stack overflow looks like (when there’s no stack protection), how flash memories work, what DMA is, what MMUs do and how systems can be bricked when their only bootloader is not read-only. It taught me how compilers work from an end-user perspective, what kinds of problems and advantages optimizations introduce, and what it’s like to develop parts of the C standard library.
It also taught me Casio support in Portugal (Ename) is pretty incompetent at fixing calculators, turning my CG 20 into a CG 10 and leaving two big capacitors out of a replacement main board. In this hardware topic, I learned quite a bit about digital logic from Prizm hardware discussions at Cemetech. And I had some contact with SH4 assembly and a glimpse into how to use IDA Pro. Thank you Casio for developing a system that works so well and yet is so broken in so many under-the-hood ways, and thank you Cemetech for briefly holding the Prizm higher than TI calcs.
I developed other add-ins, some from scratch and others as ports of existing PC software (such as Eigenmath). I still develop for the Prizm from time to time, but I have less and less motivation, as the homebrew community has stagnated and I use my Prizm much less now that I’ve gone to university. Experience in obscure calculator platforms does not make for a nice CV.
Yes, in three years or so I went from the likes of Visual Studio to a platform where the only way to debug is to write text to the screen. I still like embedded and real-time programming a lot, and have moved on to programming more generic and well-known platforms such as the ESP8266.
Getting in the elevator
During the later part of high school (which I started in the fall of 2011 and ended in the summer of 2014), I did more serious Python stuff, namely Mersit, later deprecated in favor of Picored, which is not written in Python but in Go. Yes, I began trying higher-level stuff again (higher level, getting in the elevator… sorry, I’m bad at jokes).
My first contact with Go was when I was 17, because I wanted to develop something that ran without external dependencies (i.e., unlike Java or .NET) and compiled to native code. I wanted to avoid C/C++, but I wasn’t looking for “a better C” either, so Rust was not it. Seeing so much stuff about Go at Hacker News, one day I decided to try my hand at it and I like it quite a lot – I’m still unsure if I like it because of the language itself or because of the great libraries one can use with it, but I think both play an important role.
This summer I decided to give C# another chance and I’m quite impressed – turns out I like it much more than I thought. It may have something to do with trying it after learning proper languages vs. trying it when one only knows VB. I guess my VB.NET scars are healed. I also tried a bit of Java, in my first contact with it ever, and it seems my .NET hate converted into Android API hate.
Programming with grades
University gave me the opportunity (or rather, the obligation) to have other people criticize my code. The general public could already see the open-source C code of my Casio Prizm add-ins, and even the ugly code of Goona Browser, but this time my code was getting graded. It went better than I initially thought – I guess the years of experience programming in different languages helped, especially as many of the people I’m being compared with only started programming this year.
In the first semester we took an introductory programming course, which used Python, and while it was quite easy for me, I took the opportunity to learn Python to a greater depth than “language in which to write quick and dirty glue code”. You see, until then I had not used classes in my Python code, for example. (This only goes to show Python is a versatile language, even if slow.)
We also took an introductory computer architecture course where we learned how basic CPUs work (it was good for gluing together all the separate knowledge I already had about the subject) and programmed in assembly for a course-specific CISC-like architecture. My previous experience with reading SH4 assembly proved quite useful (and it seems that nowadays the line between RISC and CISC is more blurred than ever).
In the second semester, I had the opportunity to exercise my C knowledge, this time not limited to the Prizm platform. More interestingly, logic programming, a paradigm I had no intention of ever programming in, was presented to us. So Prolog it was. It went much better than I anticipated, but as with most other people who (are forced to) learn it, I have no real use for it. So the knowledge is there, waiting for The Right Problems(tm). I am afraid I’ll forget much of it before it becomes useful, but if there’s something picking C# up again taught me, it’s that I can quickly pick up skills I learned and abandoned long ago.
The second year is about to begin and there’s some object-oriented programming coming, I hope I do well.
Summing it up
I have written non-trivial amounts of code in at least nine languages: Visual Basic, PHP, C#, Python, Lua, C, Go, Java and Prolog. I have had contact with two assembly dialects, designed web pages with HTML, CSS and JavaScript, and of course automated some tasks with bash or plain shell scripting. As can be seen, I’m yet to do any kind of functional programming.
I do not like “years of experience” as a way to measure language proficiency, especially when such languages are learned for use in short-lived side projects, so here’s a list with an approximate number of lines of code I have written in each language.
C: anywhere between 40K lines and 50K lines. Call it three years experience if you will. Most of these were for Prizm add-ins, and have since been rewritten or heavily optimized. This is changing as I develop less and less for the Prizm.
PHP: over 15K lines, two years if you want to think that way. The biggest chunk of these were for developing the additions to YOURLS used in tny.im, but every other small project takes its own 200-500 lines of code. Unfortunately, most of this is “bad” code, far from idiomatic. The usual PHP mess, you know.
Python: at least 5K lines over what amounts to about six months. Of these, most of the “clean” lines (25-35%) were for university projects.
Go: around 7K lines, six months. Not exactly idiomatic code, but it’s clean and works well.
VBA: uh, perhaps 3 or 4K lines, all bad code 🙂
VB.NET: 10K lines or so, most of it shoddy code with lots of Try…Catch to “fix” the problems. Call it two years experience.
C#: 10K lines of mostly clean and documented code. One month or so 🙂
Lua: mostly small glue scripts for my own amusement, plus some more lines for use in games such as Minetest, I estimate 3-4 K lines of varying quality.
Java: I just started, and mostly ported C# code… uh, one week and 1.5K lines?
HTML, CSS and JS: my experience with JS doesn’t go much beyond what’s needed to modify DOM elements and make simple AJAX requests. I’ve made the frontend for over 5 websites, using the Bootstrap and INK frameworks.
Prolog: a single university assignment, ~250 lines or one month. A++ impression, would repeat – I just don’t see what for.
In addition to all this, I have some experience launching the programs and services I make – designing logos/branding, versioning, keeping changelogs, update instructions, publishing, advertising, user support. Note that I didn’t say I’m good at any of these things, only that I have experience doing them, for better or worse…
Things I’d like to have more experience with:
Continuous integration / testing in general;
Debugging code outside of .NET/Visual Studio, beyond printing debug lines in C;
Using Git and other VCS in big repos/repos with more people (I want to experience those merge conflicts and commits to the wrong branch first-hand);
Server-side web development on something other than PHP and Go. And learning to use MVC frameworks, independently of the language;
C++ (and Java, out of necessity. Damned Android);
Game development. Actually, this is how many people start, but I’m so cool that I started by developing POS software 🙂
After yesterday’s popular post Windows 10 is unfinished, where I bashed said OS, today I’m going to praise Windows 10 (where possible). This is so we can keep up with the opinion diversity people are now accustomed to seeing on the Web, faithfully satisfying the thousands of Reddit and Hacker News users who can’t skip a beat on hot technology topics and, especially, on hot discussions about those topics.
A lot of people took my post as my definitive opinion on the matter, and also as if I were telling some universal truths, and mistakenly concluded that I only had negative things to say about Microsoft’s latest big release. Others said I focused on the wrong problems: that the design issues were minor nitpicks, and indeed they are when compared to the functionality problems (which I’m also having, but apparently that part was overlooked). My intention was not to write a fanboy post nor to start flamewars, and that’s the case with this post too.
Yesterday’s post was written from start to end on my Windows 10 tablet, without a hardware keyboard (yes, it was painful, but not as much as it would have been on an Android tablet with similar characteristics), including the screenshots and image editing (MS Paint FTW!). That’s not the case with today’s post, which was written on my laptop, because Microsoft is yet to issue an update that fixes the virtual keyboard in Windows 10. The OS the laptop is running doesn’t matter; let’s just say I’m writing this in MS-DOS 6.0’s edit.
Let the deserved Windows 10 praise begin.
Upgrade process
I upgraded from Windows 8.1, before Microsoft decided it was ready for me to install. Yes, I forced the download and installation process. I wanted to get it downloaded before the end of July, so that it would not count towards this month’s data cap. I wanted to get it installed because I thought it would have tons of updates to download in the first days (not the case), and also because I’m going to need this tablet operational by September when university classes begin, so I figured I had better get used to it and point out all the mistakes sooner rather than later.
Yes, I could have stayed on 8.1 for another year before losing the option to upgrade for free, but I’m also interested in developing Universal Apps, so there’s that.
Despite me rushing the update and the tablet having 32 GB of storage of which only 22 GB are for the Windows partition, the process went perfectly, and apparently I still have the option to go back to 8.1 if I wish (at the expense of only having 2 GB of free disk space on C:). All data and apps were kept, except f.lux, possibly because (as far as I could understand when uninstalling its remnants) it was installed in AppData (note that AppData is mostly kept, too, but f.lux in particular wasn’t).
From leaving Windows 8.1 to seeing the Windows 10 desktop, it took my tablet about an hour and a half. The flash storage on it is not especially fast (definitely not an SSD), which probably explains why I couldn’t do it in the one hour most people seem to manage.
All points taken into account, the upgrade process went surprisingly well and was fast, as appears to be the case with the majority of users. Much better than ending up with a system that doesn’t boot at all, or with driver issues (which some users are still having), which as far as I remember were popular problems in previous versions’ in-place upgrades. Also, kudos to Microsoft for making it work on devices with such a limited amount of system storage.
Initial setup
There was the first-run setup, where the controversial privacy defaults live (I disabled almost everything), but the most complicated part is what comes once the system finishes installing. In my case, Windows understood this was a tablet and selected tablet mode automatically. Because on 8.1 I basically only used the desktop, and because I thought it would be easier to find most settings in desktop mode, I immediately went looking for the switch, and since then I have only used desktop mode.
Desktop mode still works very well with touch screens; I switched to tablet mode for five minutes just to check it out, but went back quite fast, as I deemed the desktop good enough. Tablet mode didn’t fix the problem of the touch keyboard appearing over other windows even when docked, which would have been its major selling point for me right now.
Windows 8’s modern apps were kept from the previous version, including the MSN-powered apps such as Travel, which have been discontinued and will stop working in September. Of course, those that have a Universal app replacement (Mail, Calendar, Twitter, Maps, possibly more) were replaced. In the case of Mail and Calendar, the previously added accounts were remembered, but I had to link the Google and Microsoft accounts again, and re-enter the credentials for IMAP accounts.
OneDrive apparently now refuses to have its folder outside the C: drive, or perhaps that’s only a problem when the folder you want to choose is on a removable drive. I solved this by mounting the SD card, where I had the OneDrive folder, into a folder on the C: drive (NTFS mount points FTW!), then pointing OneDrive to this mount point. Yes, I know what I’m doing, and you should too. This SD card, despite what Windows thinks, is never removed.
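For reference, creating such a mount point boils down to a couple of Win32 calls (Disk Management and the mountvol command do essentially the same thing). Here’s a minimal sketch – the drive letter and folder below are placeholders, not my actual setup:

```c
/* Minimal sketch of creating an NTFS mount point: mounting the volume that is
 * currently D: (say, the SD card) into an empty folder on C:.
 * Placeholder paths only; needs an elevated (administrator) process. */
#include <windows.h>
#include <stdio.h>

int main(void) {
    WCHAR volume[MAX_PATH];

    /* Resolve the volume GUID path (\\?\Volume{...}\) of the SD card. */
    if (!GetVolumeNameForVolumeMountPointW(L"D:\\", volume, MAX_PATH)) {
        fprintf(stderr, "GetVolumeNameForVolumeMountPoint failed: %lu\n", GetLastError());
        return 1;
    }

    /* Mount that volume at an existing empty NTFS folder on C:. */
    if (!SetVolumeMountPointW(L"C:\\SDCard\\", volume)) {
        fprintf(stderr, "SetVolumeMountPoint failed: %lu\n", GetLastError());
        return 1;
    }
    return 0;
}
```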
I also had to download desktop Skype. Before, I was using the Modern UI version of Skype, which was discontinued some time ago. But the desktop version uses so much RAM and is so much less touch-friendly that it is one of the most annoying parts of my Windows 10 experience. It also doesn’t update with new messages during Connected Standby, which is a thing my tablet has and which I’ll talk about later, and it doesn’t put its notifications in the new Action Center either.
Tablet usage
People are saying the tablet experience has actually gotten worse with Windows 10, but to be honest, if they fixed the touch keyboard I’d say it is as good as Windows 8. Of course, if you are used to the charms bar and to the gesture of “swiping down an app” to close it, you’ll be out of luck:
swiping from the top on a window does nothing except move or restore it (if it was maximized);
swiping from the left opens the Action Center (where some handy, more or less configurable shortcuts are located, so you won’t miss the “Settings” part of the charms bar);
swiping from the right shows the task view, where you can switch apps and desktops;
sadly there’s no longer a way to bring up a big clock, even when running full-screen stuff (games, videos…), something the charms bar was good for.
As has been widely reported, Universal apps, Windows 8 apps and “normal” software made for the Win32 API now all work together, with the same window borders and title bars, showing up in the same task lists. If only it had been this way since the beginning, Windows 8 would not have received so much negative criticism, and “Modern apps” might have actually seen more use. Yes, I believe windows are adequate even for tablet devices (and not just by putting two of them side by side), and that is certainly one of Windows’ differentiating factors in the world of tablet OSes.
Resource usage
I still can’t comment much on this part, because I’m having some issues with my Voyo A1 Mini that look less like Windows’ fault and more like driver problems. The “System” process (i.e., the NT kernel) is often using an unusually large amount of RAM. I know I’m not the only user with this problem; there is at least one known bad network driver, but I don’t use it. I’ve also seen suggestions to disable the network device usage service, but in my case that didn’t help. The result is that 90-95% of the physical memory is always in use, with the commit charge at something like 3 GB out of 3.9 GB.
I have also noticed the search indexing stuff has become more aggressive again in Windows 10, after being mostly quiet on 8.1 (as far as I could tell). But since I haven’t done any serious monitoring, this could be just my impression.
The update could also have damaged the special CPU throttling set up for this device, given that it now runs much hotter than before, even under the same typical load. It appears the CPU (Intel Bay Trail) works at higher frequencies more often – just a slight load and there it goes to 1.55 GHz or so (the “announced speed” of the CPU is 1.33 GHz). I have updated to the latest DPTF (Intel’s thermal stuff) drivers and that reduced the problem a bit, but it’s still present.
Now, this isn’t all that bad, given that Windows remains very responsive even with the CPU at 75 degrees Celsius and 95% of the physical memory in use. Let’s just wait for updates, both for Windows and for the drivers, before drawing more conclusions.
Connected Standby is still annoying
My tablet supports Connected Standby. On Windows 8, it was more or less like suspending the computer, but Windows Store apps could still run in the background to perform small tasks, and if you were playing media in such an app, it would keep playing even with the screen off – just like with Android devices.
The problem is if you want to use something other than a Windows Store app (read: 99.9% of the software available for Windows) to play music or download files, or if you want to watch YouTube with something other than IE’s Modern UI mode. Windows will just suspend desktop apps, and they will stop playing, or downloading, or crunching numbers. What makes this really annoying is that there is no way to turn off the screen without entering Connected Standby. So it’s burning extra battery and, at night, our eyes too.
In Windows 10, Connected Standby is more or less the same thing. I had hoped that with Windows 10 they would add an option to whitelist certain “old-fashioned” (Win32) apps so they keep running during Connected Standby, or alternatively, a way to turn off the screen without going into standby.
At least the “Sleep” and “Turn off the screen” settings now seem a bit better decoupled, and with my current settings (turn off the screen after 2 minutes, sleep after 4) there is a bigger delay between the moment the screen turns off and the moment the music stops playing. During this delay one can tap the screen and it will turn back on, instantly – just like with a normal laptop that turns off the screen after a while. Let’s just hope Microsoft doesn’t consider this a bug and “fix” it.
Cortana
I can’t comment much on the Cortana feature itself, but I can comment on what surrounds Cortana and on whether the feature is enabled or not. Here, Windows is set up with US English as the system language. The region was set to Portugal, and the time, date and formatting settings to Portuguese. I was told by a friend that I had to set my region to the US for Cortana to become available, and that’s indeed true.
I just don’t understand why, if Cortana is going to speak in English anyway (because that’s the system language), it has anything to do with the region – unless it is expected to change the language it uses depending on the region setting, rather than on the language I want to see (and hear) things in. Oh well.
Finally, I have watched Cortana tell me how awesome all the things that can be done with this feature are, but I didn’t enable it because of the privacy policy, and I don’t think I’d use the functionality enough for it to be worth yet another “I agree” on a privacy setting. I can always turn it on later.
Feedback
Microsoft seems really interested in listening to what users have to say, so there’s a dedicated feedback app and everything. Unfortunately, this app filters content by region instead of by language, which limits what feedback you can see and upvote. I wonder if anyone at Microsoft will look at the feedback from less populous countries like the one I live in, let alone even smaller ones.
Microsoft also seems really interested in learning how people use the OS, so much so that only Enterprise users can completely disable this kind of data collection. Privacy concerns aside, I really hope the data generated by these feedback tools won’t be used as a motivator or justification for taking away even more features and customization options.
Rolling release
I always wanted to move to a rolling release Linux distro, but I’m yet to make the move; it appears I switched to a rolling Windows release before I did the same with Linux! I actually think it is a very good idea to stop releasing major versions and put new things out in a more continuous way. Major upgrades are a hassle, even when the upgrade itself takes just one hour – first a giant download, then having to wait while Windows upgrades and reboots multiple times, then having to adjust the many little settings that are new or changed in the new version…
I would be even happier if every user had the ability to refuse, or at least delay, certain updates (if nothing else because of, say, known driver and software incompatibility issues). The way things are done right now only makes the whole thing look like a giant Microsoft-controlled botnet and, by paving the way for Windows-as-a-service, makes people fear a future where you’ll pay for Windows by the month (and perhaps by the window/app/user?).
Finally, it’s about time Microsoft found an ingenious way around how file handles work in Windows, so that system files can be replaced without rebooting the system. Or at least, they could make the reboots less disruptive, for example by “suspending” the apps before the reboot and then restoring them.
Conclusion
My conclusion is to sit and wait. Windows 10 is actually pretty good for what feels like the end result of a development cycle damaged by setting a release date way too early. It should have been ready when it was ready, but I understand Microsoft not wanting to deal with another “XP to Vista” situation, where it took five years to release a new OS version with an abandoned revolutionary version in between, and a shitty end result. This way, the most people can say is that it’s shitty, but at least it came on time.
If you are using Windows 7 on a desktop and are happy with it, or using 8.1 on a tablet, I don’t think you have much to gain by upgrading now, unless you desperately want to use Cortana. People using Windows 8.1 without a touchscreen may find more value in upgrading now, especially if they use Modern UI apps and are annoyed by the context switches between them and the desktop.
Anyway, I always wanted to try Longhorn in its unstable and unpolished state, and now here is an opportunity – not with Longhorn, but with another revolutionary Windows version that, while stable, has its own big polishing needs. But we already talked about that…
Windows 10 came out some hours ago, and, surprise surprise, it’s unfinished! I can’t complain about the system stability (even though the Windows Reliability History tells me there have been some errors happening in the background), but the RAM usage has gone up when compared to 8.1. On a device with just 2 GB of RAM, this matters, but not nearly as much as what’s coming next…
What’s worse is really the touch experience – ruined, compared to 8.1. For instance, the touch keyboard no longer docks properly, which means that 90% of the time the cursor is behind the keyboard and I can’t see what I’m writing (I can’t believe nobody complained about this in the previews!). Then there are the ultra-invasive privacy settings defaulting to on, which I disabled during the first-run setup, but apparently some of my choices were ignored – for example, I disabled error reporting, and when I later went to check, I found it enabled at its highest level.
Windows 10 still suffers from many of Windows 8’s problems in terms of UI inconsistency. The gap between the “modern” UI and the classic desktop is greatly reduced, with Modern apps and Universal apps running windowed just like all other software. But things are far from perfect.
Microsoft didn’t quite manage to get rid of legacy design paradigms, and the OS still speaks at least three different design languages: if you look carefully, you’ll see elements that would fit better in Windows 7, others that are the continuation of the “modern UI” design, and things that would really fit better in XP and earlier (like the small, tabbed setting dialogs reachable from the legacy Control Panel).
There are still two control panels, with certain things only accessible in one of them, and others available in both but with different names for the same thing (or the same thing, but negated, as is the case with screen rotation lock – in some places, “on” means “do not rotate”; in others it means “allow rotation”).
At least, there are now some more links between the two settings panels, but sometimes Windows will just tell you “This setting is now on …” without actually taking you there.
Depending on where you right-click (and, for certain things, how the planets are aligned) you can open at least four different styles of context menu.
Both Windows 8 and 8.1 were, even despite their messy paradigms and inconsistent styles, more polished in terms of looks than Windows 10. Windows 10 has an incomplete icon set, with many icons yet to be updated to the new design. The fact that the icons are very different from those of 7 and 8 (the icon change from 7 to 8 was much more subtle) only makes the problem worse. You really don’t need much effort to find icons yet to be updated.
Leaving design aside, we can see that they tried to move some functionality, like Windows Update, out of the legacy Control Panel. But the migration conveys a feeling of incompleteness:
Many settings are duplicated in the Settings app and in the Control Panel. But it’s often not a 1:1 relation: to uninstall modern apps, for example, you must go through the Settings app. Going through the old Programs and Features won’t show these apps.
Certain things were renamed – the “Action Center” is the new notification center of Windows 10 (which is a really appropriate name, and what the Action Center should have been since the beginning). If you are looking for the old thing, it still exists:
There are at least two ways to add devices, with different UI flows. Also note the lack of padding on the icon of the window to the right:
The sometimes useful Math Input Panel is still stuck in the past of Windows Vista or 7, with obvious readability problems in the menu:
Then there are gems like this dialog, which, depending on where it is opened from, shows different items (possibly not exclusive to Windows 10):
The first non-preview release of Windows 10 still contains too many rough edges and suffers from a lack of attention to detail I was only used to seeing in older Windows’ preview releases. I say “first non-preview release”, because as Microsoft is switching to a rolling release model, it no longer makes much sense to call this a “final release”.
Intentionally or not, Microsoft pushed the quality assurance process onto the end user. For what is supposedly the best Windows ever made, I’m not impressed. Thank God I didn’t pay for it (even though it’s for sale, and it’s not cheap).
On 24th June last year, version 1.4 of my Utilities add-in for the Casio Prizm calculators was released. The plan was for this to be the final release of said software, with any further versions being bug-fix releases only, and because of this it was even more thoroughly tested than previous stable releases.
Ironically enough, an apparently innocent code optimization, made at a late development stage, introduced a bug in the Tasks functionality of the add-in, where a reference to a nonexistent memory object could happen when there are no tasks. At this point, I was more or less tired of the Casio Prizm platform, because of the many issues I have described throughout the years, and which the homebrew development community is yet to fully solve. However, as time went by, I’d occasionally look into my Prizm projects and inevitably end up optimizing yet another function, or adding yet another small feature.
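To give an idea of the kind of mistake that bit the Tasks functionality – with hypothetical names, since none of this is the actual Utilities code – picture a routine that touches the first entry of the task list before checking whether the list has any entries at all:

```c
/* Hypothetical illustration of the Tasks bug; function and structure names
 * are made up and do not come from the actual Utilities source. */
#include <stdio.h>

typedef struct { char title[32]; int priority; } Task;

void drawFirstTask(const Task *tasks, int count) {
    if (count == 0 || tasks == NULL)
        return; /* the guard the "optimized" code lost */

    /* Without the guard above, this line references memory that was never
     * allocated whenever the task list is empty. */
    printf("Next task: %s (priority %d)\n", tasks[0].title, tasks[0].priority);
}

int main(void) {
    drawFirstTask(NULL, 0); /* the "no tasks" case that used to misbehave */
    return 0;
}
```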
That occasional tinkering, plus the desire to iron out some rough edges, led to the discovery of another bug, this time in the calendar search function. After lengthy debugging sessions, it turned out to be a buffer overflow that could happen when reading malformed calendar database entries. Fixes for these and other bugs, plus the functionality I added as time and motivation allowed, made it clear that releasing a new version of Utilities was imperative. I often ask myself whether continuing the development of such a project is still worth it, since:
I use my Prizm much, much less than I used to (finishing high school marked the end of the period in my life when graphing calculators were needed for school);
The community of users of these calculators was never very big, and it keeps shrinking. Of the people who frequent the online communities dedicated to these calculators, some lost interest in the device, and others lost the device to a brick, for which nobody has been able to pinpoint a definite cause. Taking into account the results of my survey so far, the intersection between the group of people who own a Prizm and the group of people who look for software for it seems to contain no more than 50 people;
Of the people who remain in the communities, most never paid much attention to Utilities (due to feature creep, it’s likely that most never understood its power), and the number of users who still pay attention has shrunk too (as has their attention span for it).
Apparently, at least one hundred thousand of these devices are produced every month, yet the number of users who know they can run extra software on them is in the order of a few dozen.
Despite all this, such questions are promptly answered by the fact that I still have fun developing it, even if nobody gets to use my work. And so development progresses, albeit at a much more relaxed pace, firstly because v1.4 is still very stable (at least, nobody has complained), and secondly because there is no roadmap for v1.5, nor a planned release date. Heck, if I wanted to, I could simply not release it, and zero people would complain… though perhaps not once they see what’s coming.
In the video below (no sound), I show a small subset of the new functionality for v1.5 (if it ever gets released, heh heh). The part that, in my opinion, is going to leave some people’s mouths open starts at 3:30. It is an elaborate way of letting people extend Utilities up to a certain point, by providing an easy way to use the large number of utility functions used internally, as well as the nice GUI methods I developed. As if this wasn’t enough, one also gets access to most known syscalls (those that involve function pointers being the notable omission). What’s presented is, after all, the most powerful scripting engine ever made to run on the Prizm, and because of this one gets goodies like on-calculator development.
As hinted in the video, “PicoC script execution available on select builds only”. Starting with version 1.5 of Utilities, there will be two public builds: the normal one, with the now-usual feature set plus the added features but without PicoC, and another with all that plus PicoC support enabled. The reason for this is that such support increases the size of the add-in by at least 60 KiB, and, as can be seen in the video above, scripts have (almost) free rein over the machine, including read/write access to the whole address space (in the video, you can see a script changing the function key color, and while it’s not depicted, it can also lock and unlock Main Menu access). This means that a script can definitely brick a calculator on purpose, and do all the sorts of nasty (and good) things an add-in can do, except use syscalls that take function pointers (the reason being that PicoC doesn’t support them). It’s understandable that not everyone wants to have such a thing installed on their calculator, hence the build without PicoC.
PicoC is not especially fast, but it is definitely fast enough for many applications. It is also riddled with bugs – even something as simple as variable scoping appears to be buggy. Add to that the differences between PicoC and the C90 standard it aims to implement, and expecting to write C code with the same ease (if it was ever easy, especially after getting used to newer C standards or C++) as with a fully featured compiler is certainly unrealistic. Still, I hope my PicoC port will constitute an interesting alternative to the never-finished LuaZM and to the Casio BASIC interpreter that comes with the OS.
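To give an idea of what such a script could look like, here is a sketch in the C90-flavored subset PicoC understands. Keep in mind that every function and constant name below is purely illustrative – it is not the actual API Utilities exposes – and even the main()-style entry point is an assumption on my part:

```c
/* Purely illustrative PicoC-style script; none of these names come from the
 * real Utilities scripting API, and the entry point convention is assumed. */

int i; /* declared at file scope, partly to dodge the scoping quirks above */

void main(void)
{
    /* Something in the spirit of the function key color change shown in the
     * video: flash the function key bar and pop up a message a few times. */
    for (i = 0; i < 6; i++) {
        setFunctionKeyColor(i % 2 ? COLOR_BLUE : COLOR_RED); /* hypothetical */
        drawMessageBox("Hello from a script!");              /* hypothetical */
        sleepMs(500);                                        /* hypothetical */
    }
}
```

The point is less the specific calls and more the fact that scripts get to drive the same GUI and hardware helpers the add-in itself uses – which is exactly why the build with PicoC enabled is a separate, opt-in download.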
Regarding the other changes seen in the video, there’s the rearrangement of the menus on the home screen. The tools menu now hosts a balance manager, with support for multiple wallets, and it will also host a password generator. The old tools menu has been moved to the “Memory & System” menu on the F5 key.
That’s nice and all, but for when?
I don’t have an answer to that. With v1.5 I would like to include even more features than what I have added so far, namely a proper text editor. Such an editor is being developed by ProgrammerNerd / ComputerNerd, who, just like me, doesn’t always have much free time to work on such things. So I’m patiently waiting, and you should too. Meanwhile, feel free to ask any questions, request features (please be reasonable, and I don’t promise anything) or request development builds for a sneak peek.