I want to make native apps but Apple and Microsoft seem to be trying really hard to stop me. I have to buy developer accounts, buy certificates for signing binaries, share 30% of my revenue with them for barely any reason and so on. Not to mention the mess they've introduced in their APIs - especially Microsoft. So of course we choose the much simpler, much cheaper way of the web.
The Apple Developer Program is only needed for macOS if you want to sign your binaries or distribute through the Mac App Store. And you only have to pay Microsoft if you want to publish to the Microsoft Store (or use Visual Studio if you're a company that has more than 5 Visual Studio users, more than 250 computers, or more than $1 Million USD in annual revenue).
> buy certificates for signing binaries
Fair (though both Windows and macOS will run apps that haven't been signed, with more warnings of course).
> share 30% of my revenue with them for barely any reason
Only if you use their stores (Mac App Store or Microsoft Store), and it looks like the Microsoft Store won't take any cut if you do your own payments and it's not a game.
Yes, it definitely adds quite a bit of friction. Though my other points about not needing to pay for the Apple Developer Program unless you want to codesign (at a much lower price than what you pay for a codesigning certificate suitable for signing Windows programs) and not having to pay Apple 30% (or 15%, or anything) on macOS still stand.
Your solution solves a made up problem that nobody cares about, and doesn't solve the one that actually matters, which is to successfully make and distribute good software to users.
Someone shouldn't have to add the fine print line "Assume I am talking about things that matter, instead of things that don't" to every statement or opinion that they have.
I've walked a couple hundred customers (American small business owners) through installing an unsigned macOS application. There was plenty of friction for enough of them to cause us onboarding problems and for us to invest in doing it the Apple way.
A lot of that friction was introduced from 2017 onwards, and I think the warning now says something akin to "this application will hack your computer and is a virus", and you need to click the smaller, hidden "ignore" options a few times to do what you want.
An actual customer won’t like it when you tell them they have to turn off or bypass a security feature to run your software.
Not when other software doesn’t need it.
How about "actual users" rather than "actual customers?" We should not normalize this because it eats away at free software. It is totally unreasonable to have to pay the operating system's manufacturer in order for person A to simply distribute software to person B, outside of manufacturer's infrastructure. The manufacturer has nothing to do with that distribution, and has no business "warning" the user about this software.
As much as I hate having to submit my software to Apple for notarization, I have to admit that it's a useful measure to detect and prevent malware. The end user is protected by Apple's "Good Housekeeping" seal of approval.
Funny, I've never once in all my days installed malware from a Linux package manager, and this "seal of approval" doesn't cost me or the developer any money at all.
That’s because your computer is a hobby, and mine is a business. My customers use Windows and macOS. They have happily paid for my house, my car and my retirement. :o)
If you want to justify rent-seeking because it helps you pay for your lifestyle, come out and say so in the first place instead of pretending it's for the benefit of your users. But claiming that Linux is a "hobby" on HN is essentially trolling.
(Almost) everyone has an SSL certificate for the web. An OS could check whether software is signed with one, and maybe display a warning only for domain-validated certificates.
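As a rough sketch of the idea: the developer signs the binary with the private key behind their TLS certificate, and the OS verifies against the certificate's public key. Here the keypair is freshly generated as a stand-in for a real domain certificate's key:

```shell
# Stand-in for the private key behind your domain's TLS certificate.
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out dev.key 2>/dev/null
openssl pkey -in dev.key -pubout -out dev.pub

# Developer side: sign the binary's SHA-256 digest with the private key.
printf 'pretend this is a binary' > app.bin
openssl dgst -sha256 -sign dev.key -out app.sig app.bin

# OS side: verify the signature against the public key; prints "Verified OK".
openssl dgst -sha256 -verify dev.pub -signature app.sig app.bin
```

In a real scheme the OS would also validate the certificate chain and show the domain name instead of "unknown publisher", but the cryptographic plumbing already exists.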
This is something that definitely chafes. Even in a large-company enterprise environment, so many worthy & legitimate projects never end up shipping due to financial or office-politics reasons. Putting up paywalls between devs and work they have to spend both time and money on is bloody stupid.
Kids will learn just about anything with the right motivation. Adults whom you are trying to get to pay for your software, on the other hand...
Well, as someone who runs a few unsigned binaries myself: it's not hard if you know what to do, but Apple makes a big deal about how it's "unsafe", and this freaks non-tech people out.
I answer a support line for users at my institution installing an unsigned application, and almost every macOS support call is because the option to open an unsigned app is only shown in a normally hidden system setting.
That statement is just not true. We don't sign our software and we never had that happen with any customer. It neither happened to any unsigned software on any of my own machines, in spite of running Defender on them.
Nah, much more common that "SmartScreen" will assume they're malware and throw up a big warning prompt (which the user will say "can't be bypassed" because they didn't click "More info").
Seriously, though, I've had the Windows Defender thing happen to freshly compiled binaries I made. The only way to prevent it from happening is to sign your binaries, or submit them individually to Microsoft using your Microsoft account for malware analysis.
It flagged the binary as some sort of trojan (whose name I looked up and found was a Windows Defender designation for "I don't know the provenance of this binary, so I'm going to assume it's bad") and quarantined it.
It's often not just one button. It's a button, then opening the settings, manually navigating to the right section, clicking Open Anyway and then entering your password.
One of the reasons I moved to JavaScript web development after many years as a C/C++ dev, and after the hell of making iPhone apps for Apple's App Store: you don't have to get a license, get approval, or make an installer if you ship a web 'app'.
I think on macOS it's kind of a requirement, even if you ship outside the App Store, to be trusted by consumers. I believe the app needs to be signed/notarized by Apple in order to start without a warning, and in order for Apple to do that, you'd need a developer account.
I might be wrong here as I have been focused pretty much only on mobile, so feel free to correct me.
True. Apple devices are a lost cause for me, I don't even consider supporting them in my software. It doesn't even come up as an option in my head, I forget it exists. I'd never willfully have anything to do with their ecosystem, whether desktop or mobile. I wonder if eventually people like me refusing to support them will actually make a difference and force them to change, or if enough people will just continue to bow down to them and do what it takes to be on their devices that they can just keep their horrible practices going.
I find it inexplicable when people respond to a particular problem with a suggestion on which large platform/ecosystem someone should use instead, or avoid.
Switching ecosystems is nowhere near that trivial.
Ecosystem choices are dependent on content and tool investments, other devices owned, product groups, integrated technologies, network effects between people, between companies, customer relationships, existing phone payments, existing ecosystem familiarity and skills, on and on.
As for developers, they often need to be on the top 2-3 platforms to be a serious choice for customers.
Nothing wrong with highlighting different pros and cons of different ecosystems.
But a suggestion to switch ecosystems, without a very deep understanding of someone's particular situation, just isn't helpful advice.
I'd go further and state that "ecosystems" are evil as they erode competition. It should be easy to change products independently of each other, e.g. I should be free to choose between Apple iCloud or Google Photos for storing my photo library. Instead I'm forced to experience what you already mentioned: integration preferences on different platforms, network effects and so on.
Only direct product properties should drive users' choices, everything else just raises the market entry barrier for potential competitors.
"Ecosystems" certainly are a real problem, although I think calling them "evil" is a bit far. What they are is a way for companies to create an artificial moat, and artificial moats are very bad things.
Yeah, the only actual hurdle from Apple is the measly 8 bucks a month for a developer account. I would happily pay ten times that amount just to avoid the node_modules dumpster fire.
To be blunt, you do not have to create a developer account, sign binaries, or share 30% of your revenue with Microsoft. MS's APIs are not a mess in my opinion. You do have several options (traditional Win32, .NET, UWP, etc.). These options all work fairly well and are very flexible.
As for Apple, I do not know, but I suspect you can make Mac applications without a developer account. You need a developer account for iPhone. It was $99 a year the last time I looked. This is not a lot of money if you are serious about making an application.
If you don't sign your Windows installer, then the first N users to run it will get a scary pop-up message saying that the AV "protected your PC." I think you might also need to do code-signing if you distribute through the MS store.
Compare with the web where LetsEncrypt just works without demanding a king's ransom.
As for the APIs, it is very easy to get into dependency hell between all the different UI technologies, .NET implementations, and target systems. Want to develop a brand new plain-old GUI app? Probably simple (although I've never tried, the web is right there). Need to develop a plugin for an existing application, or a new app for something like Hololens? Have fun.
It's a bit worse with Windows. You can get a scary warning, or you can get SmartScreened to death and the app will be prevented from starting. This is random / depends on functionality, and is effectively impossible to test with 100% certainty.
How much are you going to pay stripe? 2.9% + 30¢ ... that means you have to charge 10 bucks to get down to a 6% transaction fee. Quite the price floor and an interesting cap on your pricing model!
What does managing chargebacks cost you? The moment you're taking money, you're going to hire in customer service, or spend time dealing with CS. What happens when you get a chargeback, or do a refund? Most of the time you lose money (processing fees etc.)
If you're under a million bucks a year, Apple is 15%. If you're building a low-price app or a value-add app, odds are that Apple is going to be a far better deal for you than doing it on your own.
$10 ≈ 6% fee; $5 ≈ 9% fee. Both of which are far better than Apple's fees, so that point is a bit confusing.
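The arithmetic is easy to check. A quick sketch using the 2.9% + 30¢ figure quoted above (Stripe's standard US card pricing; actual rates vary by country and plan):

```python
def stripe_fee(price):
    """Per-transaction fee at Stripe's standard US card pricing: 2.9% + $0.30."""
    return 0.029 * price + 0.30

for price in (2.50, 5.00, 10.00):
    rate = stripe_fee(price) / price
    print(f"${price:.2f}: effective Stripe fee {rate:.1%}")
# $2.50 ≈ 14.9%, $5.00 ≈ 8.9%, $10.00 ≈ 5.9%
```

The break-even against a flat 15% cut lands just under $2.50 (0.30 / (0.15 - 0.029) ≈ $2.48); below that, the fixed 30¢ dominates and Stripe is the worse deal.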
Chargebacks = customer support. I agree with that, but if you have a B2C business which has any non-trivial revenue (OP is talking about word doc apps, so we’re obviously not talking about indie $2 side project apps), then you would already have CS anyway. I fully understand there is an opportunity cost with any service and where those costs get realized, but your examples don’t seem like a slam dunk in apple’s favor.
Would you? Because I would argue that CC processing is the point where you NEED near real time CS. Before that handling customer issues can be done better through forums, and you're going to get a lot of self service support from those.
>> (OP is talking about word doc apps, so we’re obviously not talking about indie $2 side project apps)
You're competing with free: LibreOffice, Zoho Writer (shockingly popular)... I would not know how to price the product to compete... 2 bucks a month as a trial? Would I pay 10 bucks a year if you were great? If you got said productivity app past 100k users, getting to a million isn't a stretch (you have velocity and popularity).
Unless you're doing something really slimy, you're going to be able to get a better rate out of Apple if you ask your rep.
Even when using Stripe (which is a premium payment service that's more expensive than most options) you'd be better off than with Apple's 15% as long as you sell for more than $2.50. And that's not even counting the up-front costs that come with Apple (the subscription + the need to buy a Mac).
How are chargebacks managed on Apple? I doubt they are swallowing the cost on their side, so I don't really see the difference from what you'd get with a bank: you're losing the money anyway.
At 5 bucks a customer, you need 200k new ones a year to break a million bucks.
To break even with Apple, you have about 80k a year, all-in cost, to deal with all your refunds and chargebacks... after taxes, insurance, and overhead, that's 40-60k take-home for a CS agent.
What is the chargeback rate on digital goods? I'm going to tell you that if you're a small player, it will be WAY higher than Apple's. Apple will cut a consumer off if they have a high refund rate; your CS agent will have no such insight.
5-10% of your charges will just turn into refunds. Is that a process where you're killing license keys? Oh, did you forget you now have infrastructure to run to issue and maintain said keys? What is that going to cost you? Don't want to run like that... well, OK, then expect your return rate to go even higher. That discount CC processor is going to look at your refund and chargeback rate and jack your fees up sky high (because that's the name of the game).
Once you get past a million bucks, the open question is "do I do enough business to negotiate with Apple". In the case of a dry, business-oriented app that has enough popularity to make that much, you might see Apple willing to negotiate with you much sooner than a game dev who has sneaky buy options and huge chargeback rates.
> At 5 bucks a customer, you need 200k new ones a year to break a million bucks.
But at $5 per user, Apple is already much more expensive below the million threshold. It gets worse after a million, but it's already costing you tens of thousands before that. And again, you are comparing with one of the most expensive options on the market!
> after taxes, insurance and overhead that's 40-60k take home for a CS agent
Which, almost anywhere in the world, is more than you need to hire someone full time to work on your customer support! And no, what Apple provides is definitely not superior to a full-time customer support person.
The “value” that you pay for when dealing with Apple is access to their walled-off user base.
> the open question is "do I do enough business to negotiate with apple"
This isn't an “open question”, it's a closed one: Apple isn't going to talk to you unless they think not giving you special treatment would get them antitrust issues. In your case or mine, it's not gonna happen.
Does Apple charge 15% for each dollar up to a million plus 30% for each dollar above a million, or when you cross a million (in a year), do they suddenly jump to 30% of everything? IOW, if I have earned $999,999 so far this year, I have to pay Apple about $150,000. If I then make another $1 sale, do I owe a few cents more or another $150,000?
And once your rate goes to 30%, does it stay there the following year, or does the whole system reset to zero each year?
15% on the first million in a year, 30% for everything after.
Subscriptions are 15% for renewals (and maybe for all subs).
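Under the marginal reading described above (15% on the first $1M of a year's proceeds, 30% only on the amount beyond it), a quick sketch answers the "$1 more of sales" question upthread. Note that the real Small Business Program rules differ in detail (e.g. eligibility for the following year), so treat this as illustrative:

```python
def apple_commission(annual_proceeds):
    """Commission under the marginal interpretation: 15% on the first
    $1M of a year's proceeds, 30% on everything above that."""
    first = min(annual_proceeds, 1_000_000)
    rest = max(annual_proceeds - 1_000_000, 0)
    return 0.15 * first + 0.30 * rest

print(apple_commission(999_999))    # ≈ $150k
print(apple_commission(1_000_001))  # ≈ $150k: one more sale adds cents, not another $150k
```

So under this reading the rate is not retroactive: crossing the threshold only changes the rate on subsequent dollars.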
If you're pulling in more than a few million a year from Apple, and you're not "gaming" or gaming the system, I hear they are fairly open to negotiating. YMMV.
How do you calculate a price for not being able to release your main product? Usually without clear indications of what exact interpretation of a rule you are breaking...
We've had delays of a week because of things like mentioning "Android" in an integration setting that had been there for years.
Whenever I do native (native as in compiled, without going through some bytecode / VM / interpreter...) apps for Mac / Windows / Linux, I don't have to do any of this; I just use Qt.
You can static-link in all of Qt. Just build Qt yourself. It can strip out all the things you don't need, even symbols from the libraries you do use, so your binary isn't going to be that big.
You can statically link Qt in compliance with the LGPL. The LGPL only requires that users are able to substitute the LGPL'd portion of an application with an alternative and compatible implementation.
Using a shared object/DLL is the traditional way of doing so, but you can also accomplish this by providing the object files for your application to allow users to link their own substitutions statically.
The FSF explicitly permits this as documented here:
You just have to open the source of the part which depends on Qt. It's not a real problem. But get a commercial license anyway; the cost is small compared to the other costs of developing your program, and you want to be friends with them.
(There's someone on HN who lives on a single-line modification of an open source program. Trust me, source availability of the source code of your client app won't really make a difference.)
He's a nice guy. If you want your company to buy his product, you send your boss a link to the product's home page (which doesn't say "open source") and tell your boss that this product is great. Your boss looks at the pricing and description, and says ok.
I do as well. I program everything in C++ with Qt 6 (commercial license), compile statically where convenient, and use a single code base for all platforms (mobile, desktop, web). I handle the responsiveness of interfaces, DPI, and other micro-adjustments directly in a simple QML template.
They ultimately need money, not apps or platforms, so this is exactly how they achieve that ultimate benefit, no top-level logic will just justify free here
Not true. The technical term for "ultimately need money" is discounted future cash flow. It is impossible to know for sure what price you have to charge for any particular item at any given time in order to optimise for this metric.
Realistically, the answer depends on the state of competition between platforms. We all know what that state is.
If "top level logic" is supposed to mean "analytic statements" then you are right. The optimal price cannot be derived analytically.
As this is such a pointlessly contrived interpretation of the term "logic" in this context, I chose to use a different one: Is there a set of empirical circumstances under which an optimisation algorithm could conclude that the optimal price is zero? The answer to that is clearly yes.
Top level logic is supposed to mean the logic the comment I'm responding to uses to justify free. You know, "in this context". And I'm not talking about "optimal", just a single price of 0
Now, what exactly is the point of you insisting on your wrong interpretation?
you're right, no top level logic would help you settle on any specific price, hence you need to engage with actual reality instead of simply noticing one party that benefits while ignoring the other that also "ultimately benefits" and "needs the platform to run apps on".
You, of course, want to be their feudal lord and get access to all their customers by right while also requiring them to pay a tithing of their hardware sales to you since you advance "their ultimate benefit" (they wouldn't sell any hardware without software)
> You, of course, want to be their feudal lord and get access to all their customers by right
If someone buys an iPhone, Apple does not have the right to interpose themselves between that person and what they want to do with the iPhone they bought. They have no right to a cut of the sales, any more than the power company that provided the electrons to charge the battery has.
What I want is for Apple to get out of the way.
> while also requiring them to pay a tithing of their hardware sales to you since you advance
What I want them to do, for a start, is to do the same thing on iOS that they already do on macOS. I can already write a piece of software and sell it without forking 30% over to Apple.
The current situation, where they feel entitled to a cut of every software sale that happens on iOS, and veto power over it, is a wet dream that even Microsoft in the 90s wouldn't have thought they'd get away with.
Yeah that doesn't quite work. I agree that the cost of tooling has gone to nearly zero in most cases, but not giving it away will limit the people willing to even try to develop code for your platform.
Microsoft charged ~$1000 a seat for Visual Studio and at the same time they had an effective monopoly eventually leading to United States v. Microsoft Corp.
But for things like Apple notarization, you don't get the choice of not using the tooling. Besides, that transition already happened with the popularization of Free/Open software, somewhere in the early 2000s.
The problem with this argument is that the tools for proprietary platforms are inferior to the cross-platform ones in many cases. VSCode is better than XCode or Android Studio. GCC and Clang are better than MSVC. We don't need platform lock-in to subsidize good tools because the best tools are unencumbered.
I'd happily build iOS apps without XCode or any of Apple's frameworks to save the 30% fee. Heck, I'd do it even if I still had to pay the 30%, I hate being forced to use XCode.
In the early 90s, you could expect to pay anywhere from $200–1000 for a good C/C++ compiler. Now it's free. The 30% tax, as many people have already pointed out, applies only if you sell through the store. Back in the 90s, if you were selling software, downloading off the net wasn't really a thing yet; you could easily expect to give up 40–60% of the retail sales price, and out of what was left you were paying for manufacturing of the product, so you'd maybe get 20–40% of the retail sales price.
Which leaves the certificate thing and while it’s an annoyance, it’s also nice as a software user to know that a program I’m running is the program it claims to be without much friction on my part, and the cost can’t be that prohibitive since I don’t remember the last time I ended up with an unsigned binary on my Mac, even for free software like TeX and friends or Aquamacs.
> and the cost can’t be that prohibitive since I don’t remember the last time I ended up with an unsigned binary on my Mac, even for free software like TeX and friends or Aquamacs.
Ok, so your app tastes aren't that varied then (or maybe it's your memory); plenty of devs of various little utilities don't bother paying.
ICT departments in many large companies often force dev teams to use certain tools, because it's what's on their list of 'approved tools for devs'. Getting new tools on this list is often stonewalled for usually office-politics reasons.
Sometimes devs are locked into the tools they use. This situation is shit, but not uncommon.
On which platform? On Apple's, the cost is part of the premium you pay for the device. The cheapest Mac is $599; the cheapest Windows machine is $199? $99? So arguably some of that $400-500 is for the extra software. Or you could compare against Linux, where you could also get a machine for $25.
"Much cheaper" is still very relative. I got a second hand 8GB M1 MBP last year for $900, as is the standard price in my region. The cheapest M2 Air brand new retails for ~$1200. Meanwhile I've just ordered a new non-mac machine with up to 5GHz boost and 32GB RAM for a whopping... $1000, including extended warranty and delivery fee.
I don't think you "get rid of" UAC; you just put the author's name on the screen instead of "unknown publisher". (And why do you need elevated privileges? Most applications don't.) Unless you are referring to SmartScreen, which is a very different thing, although quite similar from a user's perspective.
> share 30% of my revenue with them for barely any reason
Does the App Store collect sales tax and remit on your behalf? If it does, then I think it's worth it; otherwise, registering in both the EU and UK ($0 tax threshold) as well as 50 US states (once you hit the allowed limit) will take you a long time.
And your thinking would reverse again to the common-sense baseline when you realize that alternative providers outside of locked systems don't charge 30%.
Can you name a provider? I personally use Stripe Tax for my business and while they will calculate the taxes you owe in each municipality it is totally on you to create an account with each country/state's Department of Revenue and fill out a form quarterly to submit your payments.
This paperwork is what I believe a marketplace like the App Store or Amazon do for you under their own entity that you have to do yourself if you bypass their stores.
You could consider using Galvix, which will connect with your billing system (including Stripe), fetch all the invoices and automatically prepare and file sales tax returns across each of the US states where you are legally required to file, all while providing you full control and visibility over the process. Galvix charges a flat fee of $75 per filed return, which can be considerably more affordable compared to paying a revenue share to other platforms (if tax compliance is the only reason you want to use these platforms).
What's wrong with your Amazon example? They don't charge as much (and that's partially because, while they're big and dominant, they're still not as big/dominant in that market)
If you live in the US the only entity you need to collect sales tax for is the state you live in. Despite what they may say you are under no legal obligation to collect sales tax for the other 49 states, nor the EU or UK.
I'm pretty sure that was changed by South Dakota v. Wayfair[0]. Most states seem to only require you collect the tax if you have 200 shipments into the state or $100k in revenue because going after a small time out of state e-commerce business over a few dollars of tax probably wouldn't be worth it but a large firm in Delaware refusing to collect tax on shipments into California would probably be hearing from California's government.
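The post-Wayfair "economic nexus" tests are simple threshold checks. A sketch using the commonly cited $100k / 200-transaction figures (actual thresholds, and whether the test is an OR or an AND, vary by state):

```python
def has_economic_nexus(revenue, transactions,
                       revenue_threshold=100_000, txn_threshold=200):
    """Illustrative post-Wayfair test: many states assert nexus once an
    out-of-state seller crosses ~$100k in sales OR ~200 transactions
    into the state in a year. Check each state's actual rules."""
    return revenue >= revenue_threshold or transactions >= txn_threshold

print(has_economic_nexus(5_000, 40))     # False: small out-of-state seller
print(has_economic_nexus(250_000, 120))  # True: revenue threshold crossed
```

Which is consistent with the point above: a small-time seller stays under every state's radar, while a large firm shipping heavily into a state crosses the threshold and owes collection.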
If you're shipping overseas, you can probably ignore foreign taxes if you don't have a business nexus there. Especially if you have no desire to ever visit those countries. Basically just leave it up to your customers to pay whatever tax they owe.
> Every company selling goods and services to European customers needs to collect value-added tax (VAT), even if their business is not established in Europe.
> Enforcement of judgments issued by foreign courts in the United States is governed by the laws of the states. Enforcement cannot be accomplished by means of letters rogatory in the United States. Under U.S. law, an individual seeking to enforce a foreign judgment, decree or order in this country must file suit before a competent court. The court will determine whether to recognize and enforce the foreign judgment.
Obviously, it's not a good idea to bet your business on the courts not enforcing an EU fine when you can just add the VAT and the cost of the handling hassle to the price for EU customers.
The operating idea from governments is that in the digital age, when you sell something to a customer abroad, you're selling to them on their turf and not yours. That's why you're considered liable for sales tax in the first place. Doesn't matter that your own country of residence may or may not care. For all intents and purposes it's as if you physically flew to the country and hand-delivered your software/product to your customer.
It's clearly an awful "patch" to outdated concepts on how commerce works compared to pre-internet, but it's what we have right now.
I work at a place that ships an app to both Apple and Microsoft desktops (we could even do Linux if there was ever any demand for it). We use this old thing called Java, which still seems to work. I don't develop it though, so I guess I don't have to worry about too much of my resume getting caught up with unfashionable languages (let's face the facts about what most tech these days is trying to advance - promotions - not the state of the art).
The only Java desktop app I've ever used (on any platform) without frustration was Slay the Spire, and it only passes because it's a game and doesn't require desktop integration of any kind.
Slay the spire is built using libGDX which provides a lot of cross platform support on top of Java. For platforms like Switch without JVM support it probably ships a compiled version without JIT.
The native world also refuses to create a standard UI API, making everyone use either Qt or Electron because sorry writing it over again for each platform is a hard “no.” Not even big companies do that anymore.
Yes. Not only are they refusing (and have been for decades) to create a standard UI API, they are 1. actively making their own UI APIs as different as possible from one another, even down to requiring different programming languages to use them, and 2. killing things that they once supported, which ease cross-platform code (both major platforms walking away from OpenGL in favor of their incompatible native APIs).
Not only that. There are people who go to great lengths to make sure that native apps don't work properly across desktop environments even on the same OS. They also call out anyone who dares to complain about it.
Why would platform maintainers want to encourage the lowest common denominator apps that such an API would undoubtedly result in (as a standardized UI API by definition cannot leverage platform strengths)?
Apps like that get made anyway but as it stands at least there’s a healthy crop of smaller/indie native alternatives which often best the behemoths in UI/UX. That would likely disappear with the addition of a standardized UI API, as it would probably also come with the abandonment of the old specialized APIs.
Qt licensing is its own mess.
For commercial software, the pricing is $350-500 per developer, per month. Seriously [1]. The company that now owns the framework doesn't seem to acknowledge the gap between big enterprises and solo developers/smaller teams.
[1] Yes, one can use Qt for commercial software without buying a license (as long as it is dynamically linked), but their marketing does everything it can to hide that fact. Also, the newer additions to Qt do not fall in this category – for those, you have to pay.
- Go LGPL. Sure, you will need to ship binaries and libs, but there are tools within the SDK that do this automatically for you (windeployqt, macdeployqt, etc.). And as others have stated, it is a problem that was solved years ago.
- Go Commercial to link statically. If you are a single developer, there is an annual license available for $499 (up to $100k yearly revenue).
It always shocks me that developers complain so much about Qt licensing. For any other business, an expense that small for so much value seems trivial. Without a decent UI, software is a terrible experience for most users.
The fee is for selling someone else's software. I personally despise capitalism, but your complaint about it is among the least convincing ones I have ever heard.
That is $4,200-6,000/yr. In the US, a junior developer at a software company costs (all-inclusive, not just salary) around $150,000-200,000/yr. That is 2-4% of yearly cost on tooling, which is not very much.
It might not be worth the price, but that is hardly ridiculous. It is quite believable to get a 4% productivity improvement from appropriate tooling. You need to do a cost-benefit analysis to determine the answer to that question.
Lol, scrolling on Qt is worse than on the web. I mean, you can use normal scrolling super easily on both (you don't have to do anything, and it just works). But truly custom scrolling is much harder on Qt than on the web. In a way that's a good thing, but again, the default is just as easy on the web as it is on Qt. Plus you don't have to deal with the Qt Quick/Qt Widgets split and the non-open-source parts of Qt.
That's why I said it might be a good thing. My main point was that standard scrolling is just as easy on the web. But even if you don't want standard scroll behavior, it's still easier there. There's nothing that's easier to do in Qt than on the web. Compare a QGraphicsView or a Qt 3D canvas to a WebGL canvas and again, it's fighting the framework versus stuff just working. Sure, Qt is much better for tons of other things, but I just found it weird that the comment I was replying to cited wasting time customizing stuff as the downside of web apps, as if that's not a much harder task in Qt.
You remind me of when Microsoft was claiming that bash was hard and, as an example, wrote some crazy obfuscated bash scripts rather than just doing them the sane way.
If you're doing a GUI, you have no reason to be doing canvas manually.
What? Even with Qt you often have to use a painter and draw what you want, more or less. You also need a canvas to display anything visualisation-related. In any case it doesn't matter; as I said, scrolling is just as easy on the web as it is in Qt. My point was more general: if you want to do anything custom, it's easier in JS than in Qt, even using the multiple tools Qt offers to customize the view (the painters, canvases, 3D widgets, etc.).
You're just showing me you've never done as much as a hello world in Qt. Which is completely fine, but don't paint yourself as knowing what you're talking about.
You will still be in binary-signing hell, and Windows Defender may wake up one day and decide your app is a virus "when it does X", which is exactly its business case. Complaining to MS will do nothing, since their online checker will scan it and find nothing. Boom, entire software business gone for reasons outside your control. It doesn't care about your signed certificate either.
You don’t have to do any of that for a native Mac app. Signing it is a good idea but not required and you can distribute it from your own website or even from GitHub/Lab where you can tell people it’s not notarized and they’ll need to command click and open it the first time.
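For users comfortable with a terminal, there's also the quarantine-attribute route instead of the command-click dance; this is the flag Gatekeeper checks on first launch of a downloaded app (the path below is an example):

```sh
# Clear the quarantine attribute so macOS skips the unidentified-developer prompt
xattr -dr com.apple.quarantine /Applications/MyApp.app
```

Worth noting that this asks the user to explicitly override a security check, so it's a reasonable workaround for technical audiences but exactly the kind of friction non-technical customers stumble over.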
In my opinion, this will become harder and harder to do with every release of Windows and macOS. I wouldn't count on the average customer of these vendors being willing to shop outside of their platform's app stores forever.
This doesn't matter. Notarization doesn't do anything against a dedicated attacker willing to commit illegal acts.
Notarization is supposed to deter malware through a combination of static/dynamic analysis and by attaching a real-world legal entity to every signed binary, so law enforcement can follow up if malicious activity happens.
Analysis is not bulletproof and can be worked around.
The legal entity requirement is also trivial to nullify. At least in the UK, the company registration authority charges a nominal fee (payable by credit card - stolen if necessary) and puts you on the company register. Dun & Bradstreet scrapes that and that's how you get the DUNS number necessary to register for an Apple dev account. All of this is trivial to get through if you don't mind breaking the law and making up a few fake documents and providing a stolen CC (and assuming you're already planning to break the law by distributing malware, this is not a problem).
Finally, even if the "legal entity" bit was bulletproof, law enforcement just doesn't give a shit about the vast majority of online crime anyway.
All of these requirements are just a way to lock down access to the walled garden and put as many roadblocks to laymen trying to make their own software (in favor of big corps) masquerading as security theatre.
Notarization does do things against attackers, yes.
Firstly, stolen CCs tend to get reported especially if you make a big purchase. If you use a stolen CC to buy a developer certificate then it's going to get revoked the moment the real owner notices, and then your apps will be killed remotely by Apple before they've even been detected as malicious.
Still, the big win of notarization is that Apple can track down variants of your malware once it's identified and take them all out simultaneously. They keep copies of every program running on a Mac, so they can do clustering analysis server side. On Windows there's no equivalent of notarization, but the same task is necessary because otherwise malware authors can just spin endless minor variants that escape hash based detection, so virus scanners have to try and heuristically identify variants client side. This is not only a horrific resource burn but also requires the signatures to be pushed out to the clients where malware authors can observe them and immediately figure out how they're being spotted. Notarization is a far more effective approach. It's like the shift from Thunderbird doing spam filtering all on its own using hard-coded rules, to Gmail style server side spam filtering.
> All of these requirements are just a way to lock down access to the walled garden
I've been hearing this for over a decade now. In the beginning I believed it, but it's been a long time and Apple have never made macOS a walled garden like iOS is. There's no sign they're going to do it either. After all, at least some people have to be able to write new apps!
Section 5.3: "By uploading Your Application to Apple for this digital notary service, You agree that Apple may perform such security checks on Your Application for purposes of detecting malware or other harmful or suspicious code or components, and You agree that Apple may retain and use Your Application for subsequent security checks for the same purposes."
Just stop making apps for Apple, Microsoft, Google platforms. Truth is everything except Linux is just somebody else's digital fiefdom where we developers are but serfs and the users are even lower. It's either Linux or the web.
> we choose the much simpler, much cheaper way of the web.
Once the beancounters at the rent-seeking companies (Apple, Microsoft, …) have figured out that web development is where all the money is, this will change rapidly. Google has already started gatekeeping the web via Chrome.
Either I have a reading comprehension issue, or it's surprisingly unclear which certificate you need to buy to publish an app on the Microsoft Store and what the minimum cost is.
Considering the whole point of having Windows is to use apps, I'd expect them to make the process super smooth.
Unfortunately, yes, having one's personal information accessible to large private companies really doesn't matter to most people. The only people I know who really care about this stuff are tech people, stalking victims, and victims of domestic abuse. [Admittedly, awareness is growing among women trying to get abortions, but they're also a minority shamed into silence most of the time.] This isn't going to change until there are real, public, personal stakes for the majority of people.
The whole thing is like an intentional vicious circle. People buy the systems because certain applications are available on them (or rather because that's what everyone does), and the application manufacturers support the systems where the most customers are expected. But if one takes an impartial look at which applications or functions are really needed for a company, there are certainly alternatives.
Unfortunately, the open source community sabotages itself, e.g. by constantly changing the ABI of essential functions and thus undermining the portability of non-open source applications (see e.g. https://news.ycombinator.com/item?id=32471624).
I find it very regrettable that the flagging function is now being misused more and more often, even on HN, to suppress differing but completely legitimate views. It is obvious that the majority of people are unaware of this problem or marginalize it, but that does not make it any less critical.
My statement was: Apparently, people prefer to buy expensive devices that eavesdrop and patronize them. As long as this continues and people don't run away from these manufacturers, they will continue with the trend and patronize people even more.
The hardware is difficult but people are working on it. If you really want all firmware to be open old Thinkpads are popular but I've never tried it myself. And Linux/*BSD should be your OS. I've been using Linux for over a decade and don't miss anything.
If your work mandates something you can't solve with Linux the issue is with your work and you should push to change that.
I think your comment would be better if it would at least humour the idea that people might have legitimate reasons for their preferences, even if they don’t match yours.
Such a state would not only be very antisocial (just think of the many elderly and disabled people, not to mention the less well-off, who are unable to use such small screens and controls), but would also invite the question of why it is so interested in forcing such a device on every citizen.
This seems very unlikely to me, as Sweden is known to be one of the most social countries in Europe, and such a requirement would not only discriminate against the less well-off, but also against the elderly and disabled. It would be very surprising if a majority could be found for such a regulation in Sweden.
Sweden has one of the highest wealth disparities in Europe, and it's increasing.
Also, as a person living in Sweden, let me tell you that marketing yourself as inclusive is not the same as being inclusive. Spending money so that disabled people can actually get on trains, or checking that accessibility laws are respected (they aren't), are not things that happen in Sweden.
Just last month I encountered a broken elevator at a train station. Which means no taking the train if you're on a wheelchair (and good luck with getting a refund). Even worse, if you actually were on the train, you're now stuck on the platform and can't leave until the next train shows up. Of course to buy the ticket for the next train you will need a smartphone.
"Barely any reason"... except they created and maintain the entire platform and tooling that you're building on. And in Apple's case they give it away for free with any Mac.
I'm old enough to remember when buying development tooling for DOS or Windows was $$$$$$
Apple started giving away the development environment because they had such an anemic software ecosystem. They had a handful of OpenSTEP developers and a larger crowd of die-hard Mac people, the successful ones mostly moving away from the platform.
Today Apple is taking percentage of every dollar made from application developers who participate in their App store and they are making it increasingly difficult to avoid this with every release. IMHO, they are making far more dollars today than they ever did selling development hardware and SDK licenses.
Both of these are completely false. TestFlight distribution without a developer license is impossible. Asking users to compile the app themselves is infeasible, as the Xcode toolchain is upwards of 18 GB and they would be required to recompile the app once every week to keep it on their device. The developer fee is unavoidable, even with EU intervention.
The ads are the real content from Medium’s perspective. The article is actually the medium by which the real content is delivered, like a train carrying dark passengers. The article is not what Medium cares about delivering to your browser, but the ads. And delivering the ads requires a lot of complexity.
The article is an ad: "*** provides uptime monitoring and flow-based monitoring for APIs."
This is an important subject, thus it's one for which clickbait is generated.
Size is a problem. I look at my Rust compiles scroll by, and wonder "why is that in there?". I managed to get tokio out, which took some effort. The whole "zbus" system was pulled in because the program asks if the user is in "dark mode". That brought in the "event-listener" system.
Lately, "bash" in a Linux console has become much slower about echoing characters. Did someone stick in spell check, or a LLM for autocomplete, or something?
I'm not sure if it's related, but I have the git branch in my PS1 and I've noticed that it's much slower to show a new prompt when inside very large repositories now, and I don't think that was the case previously.
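If the slowdown is the git prompt, the usual culprit is the optional dirty-state scans rather than the branch lookup itself; a sketch of a stock `git-prompt.sh` setup that stays fast in large repos (the source path varies by distro):

```sh
# ~/.bashrc
source /usr/share/git/completion/git-prompt.sh  # location varies by distro

# Leave GIT_PS1_SHOWDIRTYSTATE / GIT_PS1_SHOWUNTRACKEDFILES unset:
# they trigger git diff / git status style scans of the whole work tree
# on every prompt, which is what crawls in very large repositories.
PS1='\w$(__git_ps1 " (%s)")\$ '
```

Reading the branch name alone is a cheap lookup of `.git/HEAD`, so a prompt configured this way should stay snappy even in huge repos.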
Firefox's about:processes reports the article taking 239 MB of memory and 0.06-0.2% of my CPU ten minutes after it finished loading - 45% of the CPU time seems to be spent in Google's reCAPTCHA.
I wish Mozilla or Google or someone aggregated statistics for cpu/memory/energy usage by domain to shame devs who clearly don't otherwise care.
And browsers are larger than some operating systems. And talk about a closed-off ecosystem... WASM is still crippled, and JS/HTML/CSS is your only real viable option for web development.
The web feels like 2005 again. Only thing is, this time the popups are embedded in the page...
I think I would prefer 2005 web again. I'd probably be able to see more of the internet. I use heavy DNS filtering, no javascript on untrusted sites, no cookies, no fonts, VPN and so on. With cloudflare blocking me I basically can't see the majority of websites.
For that I fire up a Gemini browser against gemini://gemi.dev/bin/waffle.cgi and paste the URL.
For non Gemini network users, just change medium.com to scribe.rip at the URL.
My opinion about this is that yes, we lost our way, and the reason is very simple, it is because we could. It was the path of least resistance, so we took it.
Software has been freeriding on hardware improvements for a few decades, especially on web and desktop apps.
Moore's law has been a blessing and a curse.
The software you use today was written by people who learned their craft while this free-ride was still fully ongoing.
The thing that makes me crazy is that the things we do on computers are basically the same each year, yet software gets heavier and heavier. For example, in 2010 a Linux distribution with a DE consumed 100 MB of RAM right after startup, and an optimized one 60 MB. I remember it perfectly. I had 2 GB of RAM and didn't even have a swap partition.
Now, just a decade later, a computer with less than 8 GB of RAM is unusable, and one with 8 GB is barely usable. Every new piece of software uses Electron and consumes roughly 1 GB of RAM minimum! Browsers consume a ton of RAM; basically everything consumes an absurd amount of memory.
Not to mention Windows; I don't even know how people can use it. Every time I help my mother, the computer is painfully slow, and we're talking about a recent PC with an i5 and 8 GB of RAM. It takes ages to start up, software takes ages to launch, and updates can take an hour. How can people use these systems and not complain? I would throw my computer out of the window if it took more than a minute to boot; even Windows 98 was faster!
Think also about all the finished stand-alone applications which have been discarded because of replacement APIs, or because they were written in assembly. We had near-perfect (limited feature-wise from a 3-decade view, of course) word processors, spreadsheets, and single-user databases in the late 80s/early 90s which were, except for many specific use-case additions, complete & only in need of regular maintenance & quality-of-life updates were there a way to keep them current. They were in many cases far better quality & documented than almost any similar applications you can get your hands on today; so many work-years done in parallel, repeated, & lost. If there wouldn't be software sourcing & document interchange issues, it would be tempting to do all my actual office-style work on a virtual mid-90s system & move things over to the host system when printing or sending data.
Addition: consider also how few resources these applications used, & how they, if they were able to run natively on contemporary systems, would have minuscule system demands compared to their present equivalents with only somewhat less capability.
Outside gaming, AI and big data (things my parents, for instance, don't use at all), what is limited feature-wise? Browsers, sure; however, my father prefers Teletext and newsgroups and Viditel (doesn't exist anymore, but he mentions it quite a lot) over ad-infested, slow-as-pudding websites. Email didn't change since the 90s. Word processors changed, but not in the parts most people use (I still miss WP; it was just better imho; I went over to LaTeX because I find Word a complete horror show, and that didn't change). Spreadsheets are used by pros and amateurs alike, mostly as a database for making lists; nothing new there.
You can go on and on: put an average user behind an 80s/90s PC (arguably after the Win95 release; DOS was an issue for many and 3.1 was horrible; or Mac OS) and they will barely notice the difference, except for the above list of AI, big data, gaming and, most importantly, browsers. AI is mostly an API, so that can be fixed (I saw a C64 OpenAI chat somewhere); big data is used by a very small % of humanity; and gaming, well, depends what you like. I personally hate 3D games; I like 80s shmups, and most people who game are on mobile playing cwazy diamonds or whatnot, which I could implement on an 8-bit MSX machine from the early 80s. Of course the massive multiplayer open-world 3D stuff doesn't work.
Anyway, as I said here before when responding to what software/hardware to use for one's parents: whenever someone asks me to revive their computer, I install Debian with the i3 wm, Dillo and Firefox as browsers, LibreOffice and Thunderbird. It takes a few hours to get used to, but people (who are not in IT or any other computer-savvy job) are flabbergasted by the speed, low latency and battery life. I did an X220 (with a 9-cell battery) install last week, going from Win XP to the above setup; battery life jumped from 3 to 12 hours and everything is fast.
I install about 50 of those for people in my town throughout the year; people think they depend on certain software, but they really usually don’t. If they do, most things people ask for now work quite well under Wine. I have a simple script which starts an easy ‘Home Screen’ on i3 with massive buttons of their favourite apps which open on another screen (1 full screen per screen); people keep asking why Microsoft doesn’t do that instead of those annoying windows…
Windows 98 was often running on fragmented disks. I recall it taking minutes before I could do useful work. And having multiple apps open at once was more rare. While possible it often ended in crashes or unusable slowness.
Experienced same, it was faster to not multitask, do one thing a time. You would think launching 2 tasks would take 2x time with same resources, but it felt more like 3-4x. Disk was 1GB back then. I blame it on disk seek times and less advanced IO scheduling.
> The thing that makes me crazy is that the thing that we do on computers are basically the same each year
I think that is some kind of fallacy. We are doing the same things but the quality of those things is vastly different. I collect vintage computers and I think you'd be surprised how limited we were while doing the same things. I wouldn't want to go back.
Although I will say your experience with Windows is different from mine. On all my machines, regardless of specs, startup is fast to the point where I don't even think about it.
I have a Macintosh Plus, SE, 7200, and iMac G3 (System 6, 6, 7, 9) I've been using for fun lately after fixing many of them up. Even with real SCSI harddrives in the SE, 7200, and iMac, they're such a joy to use compared to a modern OS. Often much more responsive, UI is always more consistent, not to mention better aesthetics. They really don't make software like they used to. A web browser or OS should not be slow on any modern hardware but here we are.
System 7 runs so fast in BasiliskII on an old Atom netbook. I recently saw a video showing System 6 running in an emulator on an ESP32 microcontroller on an expansion card in an Apple II. It was substantially faster than the Mac Plus it was emulating. It really takes seeing this kind of thing to understand the magnitude of the problem.
My daily runner is a T400 Laptop with 4GB RAM on a fairly slim Linux distro. But in the last 6-12 months it is starting to feel a little tight when it comes to anything web browsing. Even things like Thunderbird are getting very bulky in keeping up with web rendering standards.
I pulled down an audiobook player the other day; once all dependencies were met, it needed 1.3 GB to function! At least VLC is still slim.
Not discounting your lament about memory use, this caught my eye:
> I would throw my computer out of the window if it takes more than a minute to boot up, even Windows 98 was faster!
Sure, Windows has grown a lot in size (as have other OSes). But startup is typically bounded by disk random access, not compute power or memory (granted, I don't use Windows, if 8GB is not enough to boot the OS then things are much worse than I thought). Have you tried putting an SSD in that thing?
(And yes, I realise the irony of saying "just buy more expensive hardware". But SSDs are actually really cheap these days.)
This whole thread needs a huge amount of salt and some empirical examples. I think if you compared side-by-side it’d be different. I remember my upgrade from 2019 MacBook to M1, when every single task felt about 50% faster. Or from swapping a window laptop’s HDD with an SSD. (Absolutely massive performance improvement!) Waiting forever for older windows computers to boot, update, index or search files, install software, launch programs, etc. Waiting ages for an older iMac to render an iMovie timeline.
Others in the thread talking about the heyday of older spreadsheet and document programs that were just as fast. So? I bet you could write a book on the new features and more advanced tools that MS Excel offers today compared to 1995.
We went from things taking minutes to taking seconds. So you could improve things by 50% and that could be VERY noticeable. (1min to 30s, for example.) If your app already launches in 500ms, 250ms is not going to make your laptop feel 2x faster even if it is. On top of that, since speed has been good enough for general computing for several years now, new laptops focus more on energy efficiency. I bet that new laptop has meaningfully better battery and thermal performance!
How much more advanced is Excel now compared with the 2016 version?
A new, expensive laptop has the same "fast" feeling, which fades with new iterations of software. The browser takes an insane amount of CPU and memory but isn't any faster.
Maybe some intense CPU tasks like zipping a folder are faster than ever, but I'm not zipping all day. Slack, however, behaves like there is server-side remote rendering for each screen...
If you keep your software up to date, every hardware upgrade will feel like a significant improvement. But you're comparing the end of one hardware cycle to the beginning of the next. You regain by upgrading what you previously lost to gradual bloat.
Most of my windows PC's boot time happens before my computer even starts loading the OS. If I enabled fast boot in my bios, I'm pretty sure my PC would boot in around 15 seconds.
> It was the path of least resistance, so we took it.
Well said. I believe many of the "hard" issues in software were not "solved" but worked around. IMO containers are a perfect example. Polyglot application distribution was not solved; it was bypassed with container engines. There are tools to work AROUND this issue (I ship build scripts that install compilers and tools on users' machines if they want), but that can't be tested well, so containers it is. Redbean and Cosmopolitan libc are the closest I have seen to "solving" this issue.
It's also a matter of competition, if I want users to deploy my apps easily and reliably, container it is. Then boom there goes 100mb+ of disk space plus the container engine.
It's very platform-specific. macOS has had "containers" since switching to its NeXTSTEP underpinnings with OS X in 2001. An .app bundle is essentially a container from the software-distribution PoV. Windows was late to the party, but they have it now with the MSIX system.
It's really only Linux where you have to ship a complete copy of the OS (sans kernel) to even reliably boot up a web server. A lot of that is due to coordination problems. Linux is UNIX with extra bits, and UNIX wasn't really designed with software distribution in mind, so it's never moved beyond that legacy. A Docker-style container is a natural approach in such an environment.
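To make the "complete copy of the OS" point concrete, even a trivial service image usually starts from a full distro userland; a hedged sketch (the image tag and binary name are made up):

```dockerfile
# Ships a whole Debian userland just so one binary runs reliably
FROM debian:bookworm-slim
COPY myserver /usr/local/bin/myserver
EXPOSE 8080
CMD ["myserver"]
```

A slim Debian base alone is on the order of tens of megabytes before the application even appears; the alternative of `FROM scratch` plus a statically linked binary is essentially the muslc/Go approach discussed elsewhere in the thread.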
Is it? I'm using LXC containers, but that mostly because I don't want to run VMs on my devices (not enough cores). I've noted down the steps to configure them if I ever have to redo it so I can write a shell script. I don't see the coordination problem if you choose one distro as your base and then provision them with shell scripts or ansible. Shipping a container instead of a build is the same as building desktop apps instead of electrons, optimizing for developer time instead of user resources.
Yes obviously if you control the whole stack then you don't really need containers. If you're distributing software that is intended to run on Linux and not RHEL/Ubuntu/whatever then you can't rely on the userspace or packaging formats, so that's when people go to containers.
And of course if part of your infrastructure is on containers, then there's value in consistency, so people go all the way. It introduces a lot of other problems but you can see why it happens.
Back in around 2005 I wasted a few years of my youth trying to get the Linux community on-board with multi-distro thinking and unified software installation formats. It was called autopackage and developers liked it. It wasn't the same as Docker, it did focus on trying to reuse dependencies from the base system because static linking was badly supported and the kernel didn't have the necessary features to do containers properly back then. Distro makers hated it though, and back then the Linux community was way more ideological than it is today. Most desktops ran Windows, MacOS was a weird upstart thing with a nice GUI that nobody used and nobody was going to use, most servers ran big iron UNIX still. The community was mostly made up of true believers who had convinced themselves (wrongly) that the way the Linux distro landscape had evolved was a competitive advantage and would lead to inevitable victory for GNU style freedom. I tried to convince them that nobody wanted to target Debian or Red Hat, they wanted to target Linux, but people just told me static linking was evil, Linux was just a kernel and I was an idiot.
Yeah, well, funny how that worked out. Now most software ships upstream, targets Linux-the-kernel and just ships a whole "statically linked" app-specific distro with itself. And nobody really cares anymore. The community became dominated by people who don't care about Linux, it's just a substrate and they just want their stuff to work, so they standardized on Docker. The fight went out of the true believers who pushed against such trends.
This is a common pattern when people complain about egregious waste in computing. Look closely and you'll find the waste often has a sort of ideological basis to it. Some powerful group of people became subsidized so they could remain committed to a set of technical ideas regardless of the needs of the user base. Eventually people find a way to hack around them, but in an uncoordinated, undesigned and mostly unfunded fashion. The result is a very MVP set of technologies.
The dumpster fire at the bottom of that is libc and the C ABI. Practically everything is built around the assumption that software will be distributed as source code and configured and recompiled on the target machine because ABI compatibility and laying out the filesystem so that .so's could even be found in the right spot was too hard.
To quote Wolfgang Pauli, this is not just not right, it's not even wrong ...
The "C ABI" and libc are a rather stable part of Linux. Changing the behaviour of system calls ? Linus himself will be after you. And libc interfaces, to the largest part, "are" UNIX - it's what IEEE1003.1 defines. While Linux' glibc extends that, it doesn't break it. That's not the least what symbol revisions are for, and glibc is a huge user of those. So that ... things don't break.
Now "all else on top" ... how ELF works (to some definition of "works"), the fact stuff like Gnome/Gtk love to make each rev incompatible to the prev, that "higher" Linux standards (LSB) don't care that much about backwards compat, true.
That, though, isn't the fault of either the "C ABI" or libc.
Things do break sadly, all the time, because the GNU symbol versioning scheme is badly designed, badly documented and has extremely poor usability. I've been doing this stuff for over 20 years now [1] [2], and over that time period have had to help people resolve mysterious errors caused by this stuff over and over and over again.
Good platforms allow you to build on newer versions whilst targeting older versions. Developers often run newer platform releases than their users, because they want to develop software that optionally uses newer features, because they're power users who like to upgrade, they need toolchain fixes or security patches or many other reasons. So devs need a "--release 12" type flag that lets them say, compile my software so it can run on platform release 12 and verify it will run.
On any platform designed by people who know what they're doing (literally all of the others) this is possible and easy. On Linux it is nearly impossible because the entire user land just does not care about supporting this feature. You can, technically, force the GNU ld to pick a symbol version that isn't the latest, but:
• How to do this is documented only in the middle of a dusty ld manual nobody has ever read.
• It has to be done on a per symbol basis. You can't just say "target glibc 2.25"
• What versions exist for each symbol isn't documented. You have to discover that using nm.
• What changes happened between each symbol isn't documented, not even in the glibc source code. The header, for example, may in theory no longer match older versions of the symbols (although in practice they usually do).
• What versions of glibc are used by each version of each distribution, isn't documented.
• Weak linking barely works on Linux, it can only be done at the level of whole libraries whereas what you need is symbol level weak linking. Note that Darwin gets this right.
And then it used to be that the problems would repeat at higher levels of the stack, e.g. compiling against the headers for newer versions of GTK2 would helpfully give your binary silent dependencies on new versions of the library, even if you thought you didn't use any features from it. Of course everyone gave up on desktop Linux long ago so that hardly matters now. The only parts of the Linux userland that still matter are the C library and a few other low level libs like OpenSSL (sometimes, depending on your language). Even those are going away. A lot of apps now are being statically linked against muslc. Go apps make syscalls directly. Increasingly the only API that matters is the Linux syscall API: it's stable in practice and not only in theory, and it's designed to let you fail gracefully if you try to use new features on an old kernel.
The result is this kind of disconnect: people say "the user land is unstable, I can't make it work" and then people who have presumably never tried to distribute software to Linux users themselves step in to say, well technically it does work. No, it has never worked, not well enough for people to trust it.
> How to do this is documented only in the middle of a dusty ld manual nobody has ever read.
This got an audible laugh out of me.
> Good platforms allow you to build on newer versions whilst targeting older versions.
I haven't been doing this for 20 years (only 13), but I've written a fair amount of C. This, among other things, is what made me start dabbling with zig.
~ gcc -o foo foo.c
~ du -sh foo
16K foo
~ readelf -sW foo | grep 'GLIBC' | sort -h
1: 0000000000000000 0 FUNC GLOBAL DEFAULT UND __libc_start_main@GLIBC_2.34 (2)
3: 0000000000000000 0 FUNC GLOBAL DEFAULT UND puts@GLIBC_2.2.5 (3)
6: 0000000000000000 0 FUNC GLOBAL DEFAULT UND __libc_start_main@GLIBC_2.34
6: 0000000000000000 0 FUNC WEAK DEFAULT UND __cxa_finalize@GLIBC_2.2.5 (3)
9: 0000000000000000 0 FUNC GLOBAL DEFAULT UND puts@GLIBC_2.2.5
22: 0000000000000000 0 FUNC WEAK DEFAULT UND __cxa_finalize@GLIBC_2.2.5
~ ldd foo
linux-vdso.so.1 (0x00007ffc1cbac000)
libc.so.6 => /usr/lib/libc.so.6 (0x00007f9c3a849000)
/lib64/ld-linux-x86-64.so.2 => /usr/lib64/ld-linux-x86-64.so.2 (0x00007f9c3aa72000)
~ zig cc -target x86_64-linux-gnu.2.5 foo.c -o foo
~ du -sh foo
8.0K foo
~ readelf -sW foo | grep 'GLIBC' | sort -h
1: 0000000000000000 0 FUNC GLOBAL DEFAULT UND __libc_start_main@GLIBC_2.2.5 (2)
3: 0000000000000000 0 FUNC GLOBAL DEFAULT UND printf@GLIBC_2.2.5 (2)
~ ldd foo
linux-vdso.so.1 (0x00007ffde2a76000)
libc.so.6 => /usr/lib/libc.so.6 (0x0000718e94965000)
/lib64/ld-linux-x86-64.so.2 => /usr/lib64/ld-linux-x86-64.so.2 (0x0000718e94b89000)
edit: I haven't built anything with zig as complicated as what I have with other C build systems, but so far it seems to have some legit quality-of-life improvements.
Interesting that zig does this. I wonder what the binaries miss out on by defaulting to such an old symbol version. That's part of the problem of course: finding that out requires reverse engineering the glibc source code.
I'd only like to add one thing here ... on static linking.
It's not a panacea. For non-local applications (network services), it may isolate you from compatibility issues, but only to a degree.
First, there are Linux syscalls with "version featuritis" - and by design. Meaning kernel 4.x may support a different feature set for the given syscall than 5.x or 6.x. Nothing wrong with feature flags at all ... but a complication nonetheless. Dynamic linking against libc may take advantage of newer features of the host platform whereas the statically linked binary may need recompilation.
Second, certain "features" of UNIX are not implemented by the kernel. The biggest one there is "everything names" - whether hostnames/DNS, users/groups, named services ... all that infra has "defined" UNIX interfaces (get...ent, get...name..., ...) yet the implementation is entirely userland. It's libc which ties this together - it makes sure that every app on a given host / in a given container gets the same name/ID mappings. This does not matter for networked applications which do not "have" (or "use") any host-local IDs, and whether the DNS lookup for that app and the rest of the system gives the same result is irrelevant if all-there-is is pid1 of the respective docker container / k8s pod. But it would affect applications that share host state. Heck, the kernel's NFS code _calls out to a userland helper_ for ID mapping because of this. Reimplement it from scratch ... and there is absolutely no way for your app and the system's view to be "identical". glibc's nss code is ... a true abyss.
Another such example is (another "historical" wart) timezones or localization. glibc abstracts this for you, but language runtime reimplementations exist (like the C++2x date libs) that may or may not use the same underlying state - and may or may not behave the same when statically compiled and the binary run on a different host.
Static linking, too, "solves" compatibility issues only to a degree.
glibc does provide backwards compatibility, via symbol versioning, and that allows behaviour to evolve while retaining the old behaviour for binaries that need it.
I would agree it's possibly messy, especially if you're not willing or able to change your code while providing builds for newer distros. That said though... ship the old builds. If they need only libc, they'll be fine.
(the "dumpster fire" is really higher up the chain)
> Practically everything is built around the assumption that software will be distributed as source code
Yup, and I vendor a good number of dependencies and distribute source for this reason. That, and because distributing libs via package managers kinda stinks too - it's a lot of work. I'd rather my users just download a tarball from my website and build everything locally.
I don't think that users expect developers to maintain packages for every distro. I had to compile ffmpeg lately for a Debian installation and it went without a hitch. Yes, the average user is far from compiling packages, but they're also far from running random distributions.
Now imagine same but with AI killer bot swarms. Slaughterbots. Because we could !
As long as we have COMPETITION as the main principle for all tech development — between countries or corporations etc. — we will not be able to rein in global crises such as climate change, destruction of ecosystems, or killer AI.
We need “collaboration” and “cooperation” at the highest levels as an organizing principle, instead. Competition causes many huge negative externalities to the rest of the planet.
What we really need is some way to force competition to be sportsmanlike, e.g. cooperating to compete, just like well-adjusted competitors in a friendly tournament who actually care about refining their own skills and facing a challenge from others who feel the same way, instead of cutting corners and throats to get ahead.
Cooperation with no competition subtracts all urgency because one must prioritize not rocking the boat and one never knows what negative consequences any decision one makes might prove to have. You need both forces to be present, but cooperation must also be the background/default touchstone with adversarial competition employed as a tool within that framework.
I don’t see any urgency in depleting ecosystems, building AI quickly, or any other innovation besides those that safeguard the environment, including animals.
Human society has developed far slower throughout all history and prehistory, and that was OK. We’ve solved child mortality and we are doing just fine. But 1/3 of arable farmland is now desertified, insect populations are plummeting etc.
Urgency is needed the other way — in increasing cooperation, as we did ONE TIME with the Montreal Protocol, when we almost eliminated CFCs worldwide to repair the hole in the ozone layer.
I like this viewpoint of "cooperate to compete". It's what we've been doing on a global scale as ~all nations have agreed to property rights, international trade, and abiding by laws they've written down. And in fact some would say that at the largest business scale, there is this cooperation--witness the collusion between AAPL/GOOG/etc not to poach each others' employees. But there doesn't seem to be the same respect for "smaller" businesses, as they are viewed as prey instead of weaker hunters.
You're right, but it's not just tech development, it's pervasive throughout our civilization. And solving it requires solving it almost everywhere, at close to the same time.
I disagree. It’s all the frameworks and security features, like telemetry, of the operating systems and those framework libraries. There are programs written in Lazarus (Free Pascal) that run blazing fast on Windows, even modern versions like Windows 11. Keeping software written for a specific purpose on the desktop is the best bet for quickness and stability.
Every modernization (hardware or framework) in software is a tax on the underlying software in its functional entirety.
It wasn't supposed to be like this, but it looks like most people still haven't found the way. So misguided efforts, wasted resources, and technical debt pile up like never before, at an even faster rate than the software's efficiency visibly declines.
Moore's law is still going, but we stopped making software slower.
We use JITs and GPU acceleration and stuff in our mega frameworks, and maybe more importantly, we kind of maxed out the amount of crazy JS powered animations and features people actually want.
Well, except backdrop filter. That still slows everything down insanely whenever it feels like it.