
Recent Articles about Ongoing Crapification of Personal Computing

December 3, 2020

While browsing the intertubes in the past few weeks, I came across a few articles about the ongoing crapification of personal computing. As you know, this topic interests me, since I am also writing a short series about how the computing “revolution” of the past two decades has been a showy failure. I hope to finish the next part in that series sometime soon. But until then, have a look at these posts by other people making similar observations.

Bring back the ease of 80s and 90s personal computing

Back in time when things were easy: You could opt into purchasing major (feature) upgrades every 2–3 years, and got minor (quality) updates for free very infrequently (say, 1–2 times a year). You made a conscious decision whether and when to apply upgrades or updates, and to which applications. You usually applied updates only if there was a specific reason (e.g., a feature you wanted or a bug you were running into and needed to be fixed). Systems typically ran on the exact same software configuration for months if not years.

Contrast this with today: Systems increasingly become “moving targets” because both the operating system and the applications change by updating themselves at will, without conscious decisions by the user. The absolute perversion of this is “forced automatic updates,” as are common in some organizations, where users have no choice but to accept that updates are installed on the machine (even requiring reboots of the machine) whenever some central system administrator decides that it is time to do so.

Computer latency: 1977-2017

It’s a bit absurd that a modern gaming machine running at 4,000x the speed of an apple 2, with a CPU that has 500,000x as many transistors (with a GPU that has 2,000,000x as many transistors) can maybe manage the same latency as an apple 2 in very carefully coded applications if we have a monitor with nearly 3x the refresh rate. It’s perhaps even more absurd that the default configuration of the powerspec g405, which had the fastest single-threaded performance you could get until October 2017, had more latency from keyboard-to-screen (approximately 3 feet, maybe 10 feet of actual cabling) than sending a packet around the world (16187 mi from NYC to Tokyo to London back to NYC, more due to the cost of running the shortest possible length of fiber).
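That round-the-world comparison is easy to sanity-check. Taking the 16,187-mile route quoted above and assuming (as a rule of thumb, not from the quoted article) that light in optical fiber travels at roughly two-thirds of its vacuum speed, the theoretical minimum round trip works out to about 130 ms — in the same ballpark as, or less than, the keyboard-to-screen latency of many modern machines:

```python
# Back-of-envelope check of the round-the-world latency figure.
# Assumption (not from the quoted article): signals in optical fiber
# travel at roughly 2/3 of the speed of light in vacuum.

SPEED_OF_LIGHT_KM_S = 299_792   # speed of light in vacuum, km/s
FIBER_FACTOR = 2 / 3            # typical slowdown from fiber's refractive index
ROUTE_MILES = 16_187            # NYC -> Tokyo -> London -> NYC, from the article
KM_PER_MILE = 1.609344

route_km = ROUTE_MILES * KM_PER_MILE
fiber_speed_km_s = SPEED_OF_LIGHT_KM_S * FIBER_FACTOR
latency_ms = route_km / fiber_speed_km_s * 1000

print(f"theoretical minimum round trip: {latency_ms:.0f} ms")  # ~130 ms
```

Real routes add switching, routing, and detours over the shortest great-circle fiber path, so the actual packet time is higher than this floor — which only makes the comparison with local keyboard-to-screen latency more damning.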

On the bright side, we’re arguably emerging from the latency dark ages and it’s now possible to assemble a computer or buy a tablet with latency that’s in the same range as you could get off-the-shelf in the 70s and 80s. This reminds me a bit of the screen resolution & density dark ages, where CRTs from the 90s offered better resolution and higher pixel density than affordable non-laptop LCDs until relatively recently. 4k displays have now become normal and affordable 8k displays are on the horizon, blowing past anything we saw on consumer CRTs. I don’t know that we’ll see the same kind of improvement with respect to latency, but one can hope.

Things are so bad that a google search for ‘why is windows 10 so bad’ yields hundreds of results, including long discussion threads on multiple subreddits and official Microsoft support newsgroups. You can get almost the same number of hits by asking ‘why is office 365 so bad’. And it is not just Microsoft, as you can find similar opinions about the past few iterations of Mac OS X and iOS. In case you are wondering, Android has always been a shitshow, though it is a little better than the older versions. Did I mention that even widely used google services such as Google Maps and the web version of Gmail have become significantly worse and more inconsistent over the past few years? And then there are the numerous poorly executed design updates by Amazon, FakeBook, Twatter, InstaCrack etc. My point is that this is an industry-wide phenomenon.

What do you think? Comments?