
My opinion about this is that yes, we lost our way, and the reason is very simple: because we could. It was the path of least resistance, so we took it.

Software has been freeriding on hardware improvements for a few decades, especially on web and desktop apps.

Moore's law has been a blessing and a curse.

The software you use today was written by people who learned their craft while this free-ride was still fully ongoing.




The thing that drives me crazy is that the things we do on computers are basically the same each year, yet software gets heavier and heavier. For example, just in 2010 a freshly started Linux distribution with a DE consumed 100 MB of RAM, and an optimized version 60 MB. I remember it perfectly. I had 2 GB of RAM and didn't even have a swap partition.

Now, just a decade later, a computer with less than 8 GB of RAM is unusable. A computer with 8 GB of RAM is barely usable. Every new application uses Electron and consumes roughly 1 GB of RAM minimum! Browsers consume a ton of RAM; basically everything consumes an absurd amount of memory.

And I'm not even talking about Windows; I don't know how people can use it. Every time I help my mother, her computer is painfully slow, and we're talking about a recent PC with an i5 and 8 GB of RAM. It takes ages to start up, software takes ages to launch, and it takes an hour if you need to do updates. How can people use these systems and not complain? I would throw my computer out of the window if it took more than a minute to boot up; even Windows 98 was faster!


Think also about all the finished stand-alone applications which have been discarded because of replacement APIs, or because they were written in assembly. We had near-perfect (limited feature-wise from a three-decade view, of course) word processors, spreadsheets, and single-user databases in the late 80s/early 90s which were, apart from many specific use-case additions, complete & in need only of regular maintenance & quality-of-life updates, had there been a way to keep them current. They were in many cases far better in quality & documentation than almost any similar application you can get your hands on today; so many work-years done in parallel, repeated, & lost. If there weren't software sourcing & document interchange issues, it would be tempting to do all my actual office-style work on a virtual mid-90s system & move things over to the host system when printing or sending data.

Addition: consider also how few resources these applications used, & how, if they were able to run natively on contemporary systems, they would have minuscule system demands compared to their present equivalents with only somewhat less capability.


> limited feature-wise from a 3-decade view

Outside gaming, AI and big data, i.e. things my parents for instance don't use at all, limited feature-wise how? Browsers, sure; however, my father prefers Teletext, newsgroups and Viditel (it doesn't exist anymore, but he mentions it quite a lot) over ad-infested, slow-as-pudding websites. Email hasn't changed since the 90s. Word processors changed, but not in the features most people use (I still miss WP; it was just better imho; I went over to LaTeX because I find Word a complete horror show, and that hasn't changed). Spreadsheets are used by pros and amateurs alike mostly as a database for making lists; nothing new there. You can go on and on: put an average user behind an 80s/90s PC (arguably after the Win95 release; DOS was an issue for many and 3.1 was horrible; or Mac OS) and they will barely notice the difference, except for the above list of AI, big data, gaming and, most importantly, browsers. AI is mostly an API, so that can be fixed (I saw a C64 OpenAI chat somewhere); big data is used by a very small % of humanity; and gaming, well, depends what you like. I personally hate 3D games; I like 80s shmups, and most people who game are on mobile playing cwazy diamonds or whatnot, which I could implement on an MSX 8-bit machine from the early 80s. Of course the massive-multiplayer open-world 3D stuff doesn't work.

Anyway, as I said here before when responding to what software/hardware to use for one's parents: whenever someone asks me to revive their computer, I install Debian with the i3 WM, Dillo and FF as browsers, LibreOffice and Thunderbird. It takes a few hours to get used to, but people (who are not in IT or any other computer-savvy job) are flabbergasted by the speed, low latency and battery life. I did an X220 (with a 9-cell) install last week, from Win XP to the above setup; battery life jumped from 3 to 12 hours and everything is fast.

I install about 50 of those for people in my town throughout the year; people think they depend on certain software, but they usually really don't. If they do, most things people ask for now work quite well under Wine. I have a simple script which starts an easy 'Home Screen' on i3 with massive buttons for their favourite apps, which open on another screen (one full-screen app per screen); people keep asking why Microsoft doesn't do that instead of those annoying windows…


Your sentiment is probably shared by many dusting off old systems and going back to first principles. SerenityOS is one example.


It's because a lot of it is fashion: it doesn't matter if you have an old shirt that still works, you need a new shirt.


Windows 98 was often running on fragmented disks. I recall it taking minutes before I could do useful work. And having multiple apps open at once was rarer; while possible, it often ended in crashes or unusable slowness.


I experienced the same; it was faster not to multitask and to do one thing at a time. You would think launching 2 tasks would take 2x the time with the same resources, but it felt more like 3-4x. Disks were 1 GB back then. I blame it on disk seek times and less advanced IO scheduling.


> The thing that makes me crazy is that the thing that we do on computers are basically the same each year

I think that is some kind of fallacy. We are doing the same things but the quality of those things is vastly different. I collect vintage computers and I think you'd be surprised how limited we were while doing the same things. I wouldn't want to go back.

Although I will say your experience with Windows is different from mine. On all my machines, regardless of specs, startup is fast to the point where I don't even think about it.


I have a Macintosh Plus, SE, 7200, and iMac G3 (System 6, 6, 7, 9) that I've been using for fun lately after fixing many of them up. Even with real SCSI hard drives in the SE, 7200, and iMac, they're such a joy to use compared to a modern OS: often much more responsive, the UI is always more consistent, not to mention the better aesthetics. They really don't make software like they used to. A web browser or OS should not be slow on any modern hardware, but here we are.


System 7 runs so fast in BasiliskII on an old Atom netbook. I recently saw a video showing System 6 running in an emulator on an ESP32 microcontroller on an expansion card in an Apple II. It was substantially faster than the Mac Plus it was emulating. It really takes seeing this kind of thing to understand the magnitude of the problem.


My daily runner is a T400 laptop with 4 GB of RAM on a fairly slim Linux distro. But in the last 6-12 months it has started to feel a little tight when it comes to anything web-browsing related. Even things like Thunderbird are getting very bulky in keeping up with web rendering standards.

I pulled down an audiobook player the other day; once all dependencies were met, it needed 1.3 GB to function! At least VLC is still slim.


I think there is something to be said for starting to boycott overly heavy websites.

There are some useful resources: https://greycoder.com/a-list-of-text-only-new-sites/

There are also some tricks to have a lighter web browsing by default:

- try websites with NetSurf, Links or w3m first

- use a local web-to-Gemini proxy to browse many websites with a lightweight Gemini browser

And you can go a long way by using an adblocker and/or disabling javascript by default using an extension with a toggle.


Not discounting your lament about memory use, this caught my eye:

> I would throw my computer out of the window if it takes more than a minute to boot up, even Windows 98 was faster!

Sure, Windows has grown a lot in size (as have other OSes). But startup is typically bounded by disk random access, not compute power or memory (granted, I don't use Windows, if 8GB is not enough to boot the OS then things are much worse than I thought). Have you tried putting an SSD in that thing?

(And yes, I realise the irony of saying "just buy more expensive hardware". But SSDs are actually really cheap these days.)


But it is true. My laptop with Windows, an i7, NVMe and 32 GB of RAM now feels the same as my old laptop with an i7, SSD and 16 GB of RAM did 7 years ago.

Bloatware everywhere, especially browsers.


A brand new mid-range business PC is not as snappy as one was brand new 20 years ago with XP.

And that was on an IDE HDD, with memory speed, processor speed and core count a fraction of today's, and 512 MB of graphics memory or less.


This whole thread needs a huge amount of salt and some empirical examples. I think if you compared side by side it'd be different. I remember my upgrade from a 2019 MacBook to an M1, when every single task felt about 50% faster. Or swapping a Windows laptop's HDD for an SSD (an absolutely massive performance improvement!). Waiting forever for older Windows computers to boot, update, index or search files, install software, launch programs, etc. Waiting ages for an older iMac to render an iMovie timeline.

Others in the thread are talking about the heyday of older spreadsheet and document programs that were just as fast. So? I bet you could write a book on the new features and more advanced tools that MS Excel offers today compared to 1995.

We went from things taking minutes to taking seconds. So you could improve things by 50% and that could be VERY noticeable. (1min to 30s, for example.) If your app already launches in 500ms, 250ms is not going to make your laptop feel 2x faster even if it is. On top of that, since speed has been good enough for general computing for several years now, new laptops focus more on energy efficiency. I bet that new laptop has meaningfully better battery and thermal performance!


> I bet you could write a book on the new features and more advanced tools that MS Excel offers today compared to 1995.

I'm sure you could, but it would be of interest to a relatively small audience. Excel 95 would be fine for about 90% of Excel users.


How advanced is Excel now compared with the 2016 version?

A new, expensive laptop had the same "fast" feeling, which faded with new iterations of software. The browser takes an insane amount of CPU and memory but isn't faster.

Maybe some CPU-intensive tasks like zipping a folder are faster than ever, but I'm not zipping all day. Meanwhile, Slack behaves like there is server-side remote rendering for each screen...


If you keep your software up to date, every hardware upgrade will feel like a significant improvement. But you're comparing the end of one hardware cycle to the beginning of the next. You regain by upgrading what you previously lost to gradual bloat.


I think Windows taking 1 minute on SSDs is typical, and it takes like 40 if you want to use a spinning magnet


Most of my Windows PC's boot time happens before my computer even starts loading the OS. If I enabled fast boot in my BIOS, I'm pretty sure my PC would boot in around 15 seconds.


Back in my day websites didn't have "dark mode" and we liked it. We didn't trust the compiler to do our optimizations in the snow (both ways). etc.


Back in my day there was only "dark mode" and we liked it.


Blame surveillance capitalism for a lot of this. All those hundreds (thousands?) of trackers running simultaneously add up.


> It was the path of least resistance, so we took it.

Well said. I believe many of the "hard" issues in software were not "solved" but worked around. IMO containers are a perfect example. Polyglot application distribution was not solved, it was bypassed with container engines. There are tools to work AROUND this issue: I ship build scripts that install compilers and tools on users' machines if they want, but that can't be tested well, so containers it is. Redbean and Cosmopolitan libc are the closest I have seen to "solving" this issue.

It's also a matter of competition: if I want users to deploy my apps easily and reliably, a container it is. Then boom, there go 100 MB+ of disk space plus the container engine.


It's very platform specific. MacOS has had "containers" since switching to NeXTStep with OS X in 2001. An .app bundle is essentially a container from the software distribution PoV. Windows was late to the party but they have it now with the MSIX system.

It's really only Linux where you have to ship a complete copy of the OS (sans kernel) to even reliably boot up a web server. A lot of that is due to coordination problems. Linux is UNIX with extra bits, and UNIX wasn't really designed with software distribution in mind, so it's never moved beyond that legacy. A Docker-style container is a natural approach in such an environment.


Is it? I'm using LXC containers, but that's mostly because I don't want to run VMs on my devices (not enough cores). I've noted down the steps to configure them so I can write a shell script if I ever have to redo it. I don't see the coordination problem if you choose one distro as your base and then provision with shell scripts or Ansible. Shipping a container instead of a build is the same as building Electron apps instead of desktop apps: optimizing for developer time instead of user resources.


> if you choose one distro as your base

Yes obviously if you control the whole stack then you don't really need containers. If you're distributing software that is intended to run on Linux and not RHEL/Ubuntu/whatever then you can't rely on the userspace or packaging formats, so that's when people go to containers.

And of course if part of your infrastructure is on containers, then there's value in consistency, so people go all the way. It introduces a lot of other problems but you can see why it happens.

Back in around 2005 I wasted a few years of my youth trying to get the Linux community on-board with multi-distro thinking and unified software installation formats. It was called autopackage and developers liked it. It wasn't the same as Docker, it did focus on trying to reuse dependencies from the base system because static linking was badly supported and the kernel didn't have the necessary features to do containers properly back then. Distro makers hated it though, and back then the Linux community was way more ideological than it is today. Most desktops ran Windows, MacOS was a weird upstart thing with a nice GUI that nobody used and nobody was going to use, most servers ran big iron UNIX still. The community was mostly made up of true believers who had convinced themselves (wrongly) that the way the Linux distro landscape had evolved was a competitive advantage and would lead to inevitable victory for GNU style freedom. I tried to convince them that nobody wanted to target Debian or Red Hat, they wanted to target Linux, but people just told me static linking was evil, Linux was just a kernel and I was an idiot.

Yeah, well, funny how that worked out. Now most software ships upstream, targets Linux-the-kernel and just ships a whole "statically linked" app-specific distro with itself. And nobody really cares anymore. The community became dominated by people who don't care about Linux, it's just a substrate and they just want their stuff to work, so they standardized on Docker. The fight went out of the true believers who pushed against such trends.

This is a common pattern when people complain about egregious waste in computing. Look closely and you'll find the waste often has a sort of ideological basis to it. Some powerful group of people became subsidized so they could remain committed to a set of technical ideas regardless of the needs of the user base. Eventually people find a way to hack around them, but in an uncoordinated, undesigned and mostly unfunded fashion. The result is a very MVP set of technologies.


> A lot of that is due to coordination problems.

The dumpster fire at the bottom of that is libc and the C ABI. Practically everything is built around the assumption that software will be distributed as source code and configured and recompiled on the target machine, because ABI compatibility and laying out the filesystem so that .so's could even be found in the right spot were too hard.


To quote Wolfgang Pauli, this is not just not right, it's not even wrong ...

The "C ABI" and libc are a rather stable part of Linux. Changing the behaviour of system calls ? Linus himself will be after you. And libc interfaces, to the largest part, "are" UNIX - it's what IEEE1003.1 defines. While Linux' glibc extends that, it doesn't break it. That's not the least what symbol revisions are for, and glibc is a huge user of those. So that ... things don't break.

Now "all else on top" ... how ELF works (to some definition of "works"), the fact stuff like Gnome/Gtk love to make each rev incompatible to the prev, that "higher" Linux standards (LSB) don't care that much about backwards compat, true.

That, though, isn't the fault of either the "C ABI" or libc.


Things do break, sadly, all the time, because the GNU symbol versioning scheme is badly designed, badly documented and has extremely poor usability. I've been doing this stuff for over 20 years now [1] [2], and over that time period I have had to help people resolve mysterious errors caused by this stuff over and over and over again.

Good platforms allow you to build on newer versions whilst targeting older versions. Developers often run newer platform releases than their users, because they want to develop software that optionally uses newer features, because they're power users who like to upgrade, they need toolchain fixes or security patches or many other reasons. So devs need a "--release 12" type flag that lets them say, compile my software so it can run on platform release 12 and verify it will run.

On any platform designed by people who know what they're doing (literally all of the others) this is possible and easy. On Linux it is nearly impossible because the entire userland just does not care about supporting this feature. You can, technically, force GNU ld to pick a symbol version that isn't the latest (a rough sketch of the per-symbol trick follows the list below), but:

• How to do this is documented only in the middle of a dusty ld manual nobody has ever read.

• It has to be done on a per-symbol basis. You can't just say "target glibc 2.25".

• What versions exist for each symbol isn't documented. You have to discover that using nm.

• What changes happened between each symbol version isn't documented, not even in the glibc source code. The headers, for example, may in theory no longer match older versions of the symbols (although in practice they usually do).

• What versions of glibc are used by each version of each distribution isn't documented.

• Weak linking barely works on Linux, it can only be done at the level of whole libraries whereas what you need is symbol level weak linking. Note that Darwin gets this right.
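To make that concrete, here is a minimal sketch of the per-symbol trick using the GNU assembler's .symver directive. Treat it as illustrative only: it assumes an x86_64 glibc that still exports memcpy@GLIBC_2.2.5, and the libc path in the comment varies by distro.

  /* sketch only: pin one libc symbol to an older version so the binary
     also runs on older glibc. Discover which versions actually exist
     on your system first, e.g.:
       readelf -sW /usr/lib/libc.so.6 | grep ' memcpy'
     Build with:
       gcc -fno-builtin-memcpy -o demo demo.c                           */
  #include <stdio.h>
  #include <string.h>

  /* Rebind our undefined reference from the default version
     (GLIBC_2.14 on x86_64) to the old one.                             */
  __asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

  int main(void) {
      char dst[16];
      memcpy(dst, "hello", 6);
      puts(dst);
      return 0;
  }

And this has to be repeated for every versioned symbol you end up referencing, which is the whole point of the list above.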

And then it used to be that the problems would repeat at higher levels of the stack, e.g. compiling against the headers for newer versions of GTK2 would helpfully give your binary silent dependencies on new versions of the library, even if you thought you didn't use any features from it. Of course everyone gave up on desktop Linux long ago so that hardly matters now. The only parts of the Linux userland that still matter are the C library and a few other low-level libs like OpenSSL (sometimes, depending on your language). Even those are going away. A lot of apps now are being statically linked against musl. Go apps make syscalls directly. Increasingly the only API that matters is the Linux syscall API: it's stable in practice and not only in theory, and it's designed to let you fail gracefully if you try to use new features on an old kernel.

The result is this kind of disconnect: people say "the user land is unstable, I can't make it work" and then people who have presumably never tried to distribute software to Linux users themselves step in to say, well technically it does work. No, it has never worked, not well enough for people to trust it.

[1] Here's a guide to writing shared libraries for Linux that I wrote in 2004: https://plan99.net/~mike/writing-shared-libraries.html which apparently some people still use!

[2] Here's a script that used to help people compile binaries that worked on older GNU userspaces: https://github.com/DeaDBeeF-Player/apbuild


> How to do this is documented only in the middle of a dusty ld manual nobody has ever read.

This got an audible laugh out of me.

> Good platforms allow you to build on newer versions whilst targeting older versions.

I haven't been doing this for 20 years (13), but I've written a fair amount of C. This, among other things, is what made me start dabbling with zig.

  ~  gcc -o foo foo.c
  ~  du -sh foo
  16K foo
  ~  readelf -sW foo | grep 'GLIBC' | sort -h
       1: 0000000000000000     0 FUNC    GLOBAL DEFAULT  UND __libc_start_main@GLIBC_2.34 (2)
       3: 0000000000000000     0 FUNC    GLOBAL DEFAULT  UND puts@GLIBC_2.2.5 (3)
       6: 0000000000000000     0 FUNC    GLOBAL DEFAULT  UND __libc_start_main@GLIBC_2.34
       6: 0000000000000000     0 FUNC    WEAK   DEFAULT  UND __cxa_finalize@GLIBC_2.2.5 (3)
       9: 0000000000000000     0 FUNC    GLOBAL DEFAULT  UND puts@GLIBC_2.2.5
      22: 0000000000000000     0 FUNC    WEAK   DEFAULT  UND __cxa_finalize@GLIBC_2.2.5
  ~  ldd foo                                 
    linux-vdso.so.1 (0x00007ffc1cbac000)
    libc.so.6 => /usr/lib/libc.so.6 (0x00007f9c3a849000)
    /lib64/ld-linux-x86-64.so.2 => /usr/lib64/ld-linux-x86-64.so.2 (0x00007f9c3aa72000)


  ~  zig cc -target x86_64-linux-gnu.2.5 foo.c -o foo
  ~  du -sh foo
  8.0K  foo
  ~  readelf -sW foo | grep 'GLIBC' | sort -h        
       1: 0000000000000000     0 FUNC    GLOBAL DEFAULT  UND __libc_start_main@GLIBC_2.2.5 (2)
       3: 0000000000000000     0 FUNC    GLOBAL DEFAULT  UND printf@GLIBC_2.2.5 (2)
  ~  ldd foo                                 
    linux-vdso.so.1 (0x00007ffde2a76000)
    libc.so.6 => /usr/lib/libc.so.6 (0x0000718e94965000)
    /lib64/ld-linux-x86-64.so.2 => /usr/lib64/ld-linux-x86-64.so.2 (0x0000718e94b89000)

edit: I haven't built anything as complicated with zig as I have with the other C build systems, but so far it seems to have some legit quality-of-life improvements.


Interesting that zig does this. I wonder what the binaries miss out on by defaulting to such an old symbol version. That's part of the problem of course: finding that out requires reverse engineering the glibc source code.


Maybe I'm just nitpicking, but he _specified_ the target version for the zig compile.

(Haven't tested what it would link against were that not given.)


> Maybe just nitpicking but he _specified_ the target version for the zig compile.

Right, but I was able to do it as a whole. I didn't have to do it per symbol.


Thanks for the extensive examples of "the mess"...

I'd only like to add one thing here ... on static linking.

It's not a panacea. For non-local applications (network services), it may isolate you from compatibility issues, but only to a degree.

First, there are Linux syscalls with "version featuritis" - and by design. Meaning kernel 4.x may support a different feature set for the given syscall than 5.x or 6.x. Nothing wrong with feature flags at all ... but a complication nonetheless. Dynamic linking against libc may take advantage of newer features of the host platform whereas the statically linked binary may need recompilation.
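To illustrate that first point (just a sketch, assuming kernel headers and a glibc recent enough to declare SYS_statx and struct statx, roughly glibc >= 2.28), a statically linked binary ends up doing this kind of runtime probing, trying the newer syscall and falling back when an old kernel answers ENOSYS:

  #define _GNU_SOURCE
  #include <errno.h>
  #include <fcntl.h>          /* AT_FDCWD */
  #include <stdio.h>
  #include <sys/stat.h>       /* struct statx, STATX_SIZE */
  #include <sys/syscall.h>    /* SYS_statx (kernel headers >= 4.11) */
  #include <unistd.h>

  int main(void) {
      struct statx stx;
      /* Try the newer interface first ... */
      if (syscall(SYS_statx, AT_FDCWD, "/etc/hostname", 0, STATX_SIZE, &stx) == 0) {
          printf("statx: %llu bytes\n", (unsigned long long)stx.stx_size);
      } else if (errno == ENOSYS) {
          /* ... and fall back to the classic one when the kernel predates it. */
          struct stat st;
          if (stat("/etc/hostname", &st) == 0)
              printf("stat: %lld bytes\n", (long long)st.st_size);
      }
      return 0;
  }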

Second, certain "features" of UNIX are not implemented by the kernel. The biggest one there is "everything names" - whether hostnames/DNS, users/groups, named services ... all that infra has "defined" UNIX interfaces (get...ent, get...name..., ...) yet the implementation is entirely userland. It's libc which ties this together - it makes sure that every app on a given host / in a given container gets the same name/ID mappings. This does not matter for networked applications which do not "have" (or "use") any host-local IDs, and whether the DNS lookup for that app and the rest of the system gives the same result is irrelevant if all-there-is is pid1 of the respective docker container / k8s pod. But it would affect applications that share host state. Heck, the kernel's NFS code _calls out to a userland helper_ for ID mapping because of this. Reimplement it from scratch ... and there is absolutely no way for your app and the system's view to be "identical". glibc's nss code is ... a true abyss.

Another such example (another "historical" wart) is timezones or localization. glibc abstracts this for you, but language runtime reimplementations exist (like the C++2x date libs) that may or may not use the same underlying state - and may or may not behave the same when statically compiled and the binary run on a different host.

Static linking "solves" compatibility issues also only to a degree.


glibc is not stable on Linux. Syscalls are.


glibc is ABI-compatible in the forward direction.


https://cdn.kernel.org/pub/software/libs/glibc/hjl/compat/

It's providing backwards compatibility (by symbol versioning). And that way allows for behaviour to evolve while retaining it for those who need that.
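For anyone who hasn't seen the mechanism from the library author's side, here is a minimal sketch of how one exported name can carry both the old and the new behaviour (the names libdemo, foo and DEMO_1.0/DEMO_2.0 are made up for illustration):

  /* libdemo.c - build with something like:
       gcc -shared -fPIC libdemo.c -Wl,--version-script=demo.map -o libdemo.so
     where demo.map contains:
       DEMO_1.0 { global: foo; local: *; };
       DEMO_2.0 { global: foo; } DEMO_1.0;                               */

  int foo_old(int x) { return x + 1; }   /* original behaviour */
  int foo_new(int x) { return x + 2; }   /* changed behaviour  */

  /* Binaries linked long ago recorded foo@DEMO_1.0 and keep getting
     foo_old; anything linked now resolves to the default (@@) version. */
  __asm__(".symver foo_old, foo@DEMO_1.0");
  __asm__(".symver foo_new, foo@@DEMO_2.0");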

I would agree it's possibly messy, especially if you're not willing or able to change your code while providing builds for newer distros. That said, though... ship the old builds. If they need only libc, they'll be fine.

(the "dumpster fire" is really higher up the chain)


> Practically everything is built around the assumption that software will be distributed as source code

Yup, and I vendor a good number of dependencies and distribute source for this reason. That, and because distributing libs via package managers kind of stinks too; it's a lot of work. I'd rather my users just download a tarball from my website and build everything locally.


I don't think that users expect developers to maintain packages for every distro. I had to compile ffmpeg lately for a Debian installation and it went off without a hitch. Yes, the average user is far away from compiling packages, but they're also far away from random distributions.


I think Flatpak is closer to .app bundles, so the argument is a little unfair.


Now imagine the same but with AI killer-bot swarms. Slaughterbots. Because we could!

As long as we have COMPETITION as the main principle for all tech development — between countries or corporations etc. — we will not be able to rein in global crises such as climate change, destruction of ecosystems, or killer AI.

We need “collaboration” and “cooperation” at the highest levels as an organizing principle, instead. Competition causes many huge negative externalities to the rest of the planet.


What we really need is some way to force competition to be sportsmanlike, e.g. cooperating to compete, just like well-adjusted competitors in a friendly tournament who actually care about refining their own skills and facing a challenge from others who feel the same way, instead of cutting corners and throats to get ahead.

Cooperation with no competition subtracts all urgency because one must prioritize not rocking the boat and one never knows what negative consequences any decision one makes might prove to have. You need both forces to be present, but cooperation must also be the background/default touchstone with adversarial competition employed as a tool within that framework.


I don't see any urgency in depleting ecosystems, building AI quickly, or any other innovation besides those that safeguard the environment, including animals.

Human society has developed far more slowly throughout all of history and prehistory, and that was OK. We've solved child mortality and we are doing just fine. But 1/3 of arable farmland is now desertified, insect populations are plummeting, etc.

Urgency is needed the other way — in increasing cooperation. As we did ONE TIME with the Montreal Protocol, when we almost eliminated CFCs worldwide to repair the hole in the ozone layer.


I like this viewpoint of "cooperate to compete". It's what we've been doing on a global scale as ~all nations have agreed to property rights, international trade, and abiding by laws they've written down. And in fact some would say that at the largest business scale there is this cooperation--witness the collusion between AAPL/GOOG/etc. not to poach each other's employees. But there doesn't seem to be the same respect for "smaller" businesses, as they are viewed as prey instead of weaker hunters.


You're right, but it's not just tech development, it's pervasive throughout our civilization. And solving it requires solving it almost everywhere, at close to the same time.


I disagree. It's all the frameworks and the security features like telemetry in the operating systems and those framework libraries. There are programs written in Lazarus (Free Pascal) that run blazing fast on Windows, even modern versions like Windows 11. Keeping software written for a specific purpose on the desktop is the best bet for quickness and stability.

Every modernization (hardware and framework) in software is a tax on the underlying software in its functional entirety.


> path of least resistance

Great take. It feels like the path of least resistance peppered with obscene amounts of resume driven development.

Complexity in all the wrong places.


> Did we lose our way

It wasn't supposed to be like this, but it looks like most people still haven't found the way.

So misguided efforts, wasted resources, and technical debt pile up like never before, and at an even faster rate than the efficiency of the software itself declines on the surface.


Moore's law is still going, but we stopped making software slower.

We use JITs and GPU acceleration and stuff in our mega frameworks, and maybe more importantly, we kind of maxed out the amount of crazy JS powered animations and features people actually want.

Well, except backdrop-filter. That still slows everything down insanely whenever it feels like it.



