Leaving Rust gamedev after 3 years (loglog.games)
1481 points by darthdeus 10 days ago | 965 comments





That's a good article. He's right about many things.

I've been writing a metaverse client in Rust for several years now. Works with Second Life and Open Simulator servers. Here's some video.[1] It's about 45,000 lines of safe Rust.

Notes:

* There are very few people doing serious 3D game work in Rust. There's Veloren, and my stuff, and maybe a few others. No big, popular titles. I'd expected some AAA title to be written in Rust by now. That hasn't happened, and it's probably not going to happen, for the reasons the author gives.

* He's right about the pain of refactoring and the difficulties of interconnecting different parts of the program. It's quite common for some change to require extensive plumbing work. If the client that talks to the servers needs to talk to the 2D GUI, it has to queue an event (a minimal sketch of that pattern follows these notes).

* The rendering situation is almost adequate, but the stack isn't finished and reliable yet. The 2D GUI systems are weak and require too much code per dialog box.

* I tend to agree about the "async contamination" problem. The "async" system is optimized for someone who needs to run a very large web server, with a huge number of clients sending in requests. I've been pushing back against it creeping into areas that don't really need it.

* I have less trouble with compile times than he does, because the metaverse client has no built-in "gameplay". A metaverse client is more like a 3D web browser than a game. All the objects and their behaviors come from the server. I can edit my part of the world from inside the live world. If the color or behavior or model of something needs to be changed, that's not something that requires a client recompile.
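A minimal sketch of that kind of event plumbing, with hypothetical event and field names (the real client is of course far more involved):

    use std::sync::mpsc;
    use std::thread;

    // Hypothetical events the network client queues for the 2D GUI.
    enum UiEvent {
        ChatMessage(String),
        AvatarMoved { id: u64, x: f32, y: f32, z: f32 },
    }

    fn main() {
        let (tx, rx) = mpsc::channel::<UiEvent>();

        // Network client thread: it can't call into the GUI directly,
        // so it queues events instead.
        thread::spawn(move || {
            tx.send(UiEvent::ChatMessage("hello".into())).unwrap();
            tx.send(UiEvent::AvatarMoved { id: 1, x: 0.0, y: 0.0, z: 0.0 }).unwrap();
        });

        // GUI side: drain the queue (in a real client, once per frame).
        for event in rx {
            match event {
                UiEvent::ChatMessage(text) => println!("chat: {text}"),
                UiEvent::AvatarMoved { id, .. } => println!("avatar {id} moved"),
            }
        }
    }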

The people using C# and Unity on the same problem are making much faster progress.

[1] https://video.hardlimit.com/w/7usCE3v2RrWK6nuoSr4NHJ


> I'd expected some AAA title to be written in Rust by now.

I'm disinclined to believe that any AAA game will be written in Rust (one is free to insert "because Rust's gamedev ecosystem is immature" or "because AAA game development is increasingly conservative and risk-averse" at their discretion), yet I'm curious what led you to believe this. C++ became available in 1985, and didn't become popular for gamedev until the turn of the millennium, in the wake of Quake 3 (buoyed by the new features of C++98).


Lamothe's Black Art book came out in '95. Abrash's black book came out in '97.

Borland C++ was pretty common and popular in 93 and we even had some not-so-great C++ compilers on Amiga in 92/93 that had some use in gamedev.

SimCity 2000 was written in C++, way back in '93 (although they started with Cfront).

An absolute fuckton of shareware games I was playing in the 90s were built with Turbo C++.


Kind of true, however they had endless amounts of inline assembly, as shown in the Black Book as well.

I know of at least one MS-DOS game, published in the Portuguese Spooler magazine, that was using Turbo C++ basically as a macro assembler.

One of the PlayStation's selling points for developers was being the first home console with a C SDK, while SEGA and Nintendo were still doing assembly; C++ support only came later, with the PlayStation 2.

While I agree that C++, BASIC, Turbo Pascal, and AMOS were being used a lot, especially in the demoscene, they were our Unity, from the point of view of successful game studios.


I also remember from the videogame magazines I was reading back in the early 90s that another C++ compiler that was a favourite among devs was Watcom C++, which was released in '88.

That doesn't mean that it was used primarily with C++ though. IIRC Watcom C/C++ mainly became popular because of Doom, and that was written in C (as were all id games until Doom 3 in 2004 - again IIRC though).

The actual killer feature of Watcom C/C++ was not the C or C++ compiler, but its integration with DOS4GW.


Btw, I don't remember Turbo C or Borland C++ being able to compile to 32-bit x86 on DOS.

Borland C++, Microsoft C/C++, and GCC (DJGPP[1]) could all target 32-bit extended DOS, but Watcom was the first[2] to bundle a royalty-free DOS extender[3].

[1] https://news.ycombinator.com/item?id=39038095

[2] https://www.os2museum.com/wp/watcom-win386/

[3] https://en.wikipedia.org/wiki/DOS_extender


OMG, the name "Watcom" just opened a flood of nineties memories of the demo scene for me. Thanks for mentioning.

I really hope that C++ evolves with gamedev and they become more and more symbiotic.

Maybe adoption of Rust by the gamedev community isn't the best thing to wish for the language. Maybe it is better to let another crowd steer the evolution of Rust, letting systems programming and gamedev drift apart.


I think I don't know a single gamedev who's fond of "modern C++" or even the C++ stdlib in general (and stdlib changes are what most of "modern C++" is about). The last good version was basically C++11. In general the C++ committee seems to be largely disconnected from reality (especially now that Google seems to be doing its own C++ successor, but even before, Google's requirements are entirely different from gamedev requirements).

C++17/20 are light-years beyond C++11 in terms of ergonomics and usability. Metaprogramming in C++11 is unrecognizable compared to C++20, things have improved so much. I hated C++ before C++11, but now C++11 feels quite legacy compared to even C++17. The ability to write almost anything, like a logging library, without C macros is a huge improvement for maintainability and robustness.

Most of the features in modern C++ are designed to enable writing really flexible and highly optimized libraries. C++ rarely writes those libraries for you.


Heh, mentioning metaprogramming and logging is not exactly how you convince anybody of superior ergonomics and usability.

Metaprogramming is required to get typesafe, easy-to-use code. The problem with most template code is that the implementation gets horrendously complicated, but for the user it can create a LOT of comfort. At work, for example, I wrote a function that calls an RPC method, and it has a few neat features:

An rpc call with a result looks like this:

    call(<methodinfo>, <param>, [](Result r) {});

vs one which returns void:

    call(<methodinfo>, <param>, []() {});

It's neat that the callback reflects that, but this wouldn't be possible without some compiletime magic.


It convinced me

Hi, I'm a game developer and I'm fond of "modern C++" and the stdlib. Sure, I would like some priorities to be different (i.e. we should have had static reflection a while ago), but it's still moving in the right direction.

Particularly the idea that "the last good version was basically C++11" is exactly what I would expect to hear from someone who reads a few edgy articles on the internet but has no actual in-depth experience working with the language. C++14 and 17 are, for a large part, plain ergonomic upgrades over C++11, with lots of minor but impactful additions and improvements all over. I can't even think of anything in those two versions that would be sufficiently controversial to make anyone prefer C++11 over them, or call it the "last good version".

C++20 is obviously a larger step, and does include a few more controversial changes, but those are completely optional (and I don't expect many of them to be widely adopted in gamedev for a decade at least, even though for some I wish it went more quickly).


> stdlib changes is what most of "modern C++" is about). the last good version was basically C++11.

I can only comment on this like: tell me you have no idea about the current state of C++ without telling me you have no idea about the current state of C++.


Then let's hear some counter examples please. As far as I'm aware the last important language change since C++11 was designated init in C++20, and that's been butchered so much compared to C99 that it is essentially useless for real world code.

There's a whole bunch of features and fixes in each new version of the standard which significantly improved the usability, expressibility and convenience of the language. Describing many of them could easily take an hour. I'm sorry, I can only highlight a few of my particular favourites that I regularly use and let you study the rest of the changes.

https://en.cppreference.com/w/cpp/14

- fixed constexpr, which in C++11 was basically unusable

- great improvements for metaprogramming, such as variable templates and generic lambdas, which made gems like `boost::hana` possible

- function return type deduction

https://en.cppreference.com/w/cpp/17

- inline variables finally fixes the biggest pain of developing header-only libraries

- useful noexcept fix

- if constexpr + constexpr lambdas

- structured bindings

- guaranteed copy elision

- fold expressions

I'm in automotive, where due to safety requirements we only just started working with C++17, so I don't have much practical experience with the standards past it, though I'm aware there are great updates there too. Overall, C++11 is as horrible compared to C++17 as C++98 (and roughly 03) was compared to the then ground-breaking C++11. Personally, when I skim through job vacancies and see they are stuck at C++11, I pass. Even C++14 makes me very sceptical, even though I used it a lot - all due to the nice new improvements in C++17.

https://en.cppreference.com/w/cpp/20

https://en.cppreference.com/w/cpp/23


Ok, I'll give you fold expressions and structured bindings as actually important language updates. The rest are mostly just tweaks that plug feature gaps which shouldn't have existed in the first place when the basic feature was introduced in C++11 or earlier.

IMHO by far most things which the C++ committee accepts as stdlib updates should actually be language changes (like for instance std::tuple, std::variant or std::ranges). Because as stdlib features those things make C++ code more and more unreadable compared to "proper" syntax sugar (Rust suffers from the exact same problem btw).


He missed concepts and modules, which are also C++20 features; modules are just not properly supported (yet). Concepts are a massive QoL feature and modules might help with compile times.

> IMHO by far most things which the C++ committee accepts as stdlib updates should actually be language changes

From my experience that's not how the C++ committee works. They generally decompose requested features into the smallest building blocks, include just those in the language, and let the rest be handled by the stdlib.

The thing that makes C++ unreadable in my opinion is template code and the fact that the namespace system sucks and just leads to unreadably long names (std::chrono::duration_cast<std::chrono::milliseconds>(.....)).


[flagged]


You should probably tone down your speech and lay off the patronizing attitude, no matter how well justified your arguments are.

Oh, I followed the C++ standardization process quite closely for about 15 years up until around C++14 and still follow it from the sidelines (having mostly switched back to C since then), and I'm fully aware of the fact that C++ has designed itself into a complexity corner where it is very hard to add new language features (after all, C++ has added more new problems that then had to be fixed in later standards than it inherited from C in the first place).

I still think the C++ committee should mainly be concerned about the language instead of shoehorning stuff into the stdlib, even if fixing the language is the harder problem.

And I can't be alone in this frustration, otherwise Carbon, Circle and Herb Sutter's cppfront wouldn't have happened.


It's even worse than that, because even if a new proposal had no concerns from a language & library point of view, it can still be crippled by vendor concerns because of short-sighted, entirely unforced errors the vendors made, often decades prior.

It's part of why I don't believe the C++-compatible C++-successor languages will deliver on their promises nearly as well as they think. They only solve half of the problem, which is that their translation units don't have to accommodate legacy C++ syntax.

They still have to reproduce existing C++ semantics and ABIs, their types still have to satisfy C++ SFINAE and Concepts, etc. so they're bringing all of the semantic baggage no matter what new syntax they dress it in.

And anywhere they end up introducing new abstractions to try to enforce safety, those will be incompatible with C++ enough to require hand-crafted wrappers, just like we already do with Rust, only Rust is much further along its own maturity and adoption curve than those languages are.


A practical example of C++14 and its constexpr + variable template fixes, and why this was important: a while ago I wrote a wrapper over a compile-time fixed-size array that imposed a compile-time fixed tensor layout on it. Basically, it turned a linear array into any matrix, or 3D or 4D or whatever-D tensor is needed, and allowed working with them efficiently at compile time already. There was constexpr construction + constexpr indexing + some constexpr tensor operations, in particular a constexpr trace operation for square matrices (the sum of the elements on the main diagonal, if I'm not mistaken).

I decided to showcase the power of constexpr to some juniors on the team. For some reason, I thought that since the indexing operation is constexpr, computing the matrix trace would only require the compiler to load matrix elements from offsets precomputed at compile time (without computing those offsets at runtime, since the matrix layout is fixed at compile time and index computation is a constexpr operation). So I quickly wrote an example, compiled it with asm output, and looked at it... It was a facepalm moment - I had forgotten that trace() was also constexpr, so instead of doing any runtime computation at all, the code just had the already-computed trace value as a constant in a register. How is that not cool? Awesome!

Such things are extremely valuable, as they allow you to write much more expressive, easier-to-understand and more maintainable code for entities known at compile time.


I sometimes wonder if the problem with rust is that we have not yet had a major set of projects which drive solutions to common dev problems.

Go had Google driving adoption, which in turn drove open source efforts. The language had to remain grounded so as not to get in the way of building back-end services.

Rust had mozilla/servo which was ultimately unsuccessful. While there are more than a few companies using Rust for small projects with tough performance guarantees - I haven't seen the “we manage 1-10 MM sloc of complex code using rust” type projects.


Microsoft is rewriting quite a bit of their C# to Rust for performance reasons, especially within their business line products. Rust has also become rather massive in the underlying tech of the telecommunications infrastructure in several countries.

So I'm not sure that your take is really so on point. Especially as far as comparing it with Go goes (heehee), at least not in terms of 3rd party libraries, where most of the Go ecosystem seems to be either maintained by one or two people or abandoned as those two people got new jobs. I think Go is cool by the way, but there is a massive difference in the maturity of the sort of libraries we looked into using during our PoCs.

Anyway. A lot of Rust adoption is a little quiet, and well, rather boring. So maybe that’s why you don’t hear too much about it.


Quiet adoption often means that a couple people in a company chose to invest in at least a small effort. It's unknown if those people would do it again, and they are unlikely to invest 2-3 devs to improve the rust library and language ecosystem.

Major adoption gets you tools like guice, 50+ person tools teams, and more.


Microsoft rewrote one, maybe two microservices, as it was driven by a lead interested in using Rust, and is rewriting parts of the NT kernel (way more important).

It's much more than that; even now they are continuously opening job postings with a focus on rewriting the 365 platform from C# to Rust.

It’s a bad habit to read too much into a single job posting.

(oh, I remember now, it’s the account traumatized by odata)


I’m not sure why you’re trying to make it seem like Microsoft isn’t rewriting the core of their 365 business products from C# to Rust, but you do you I guess.

As far as I’m aware I was never traumatised by OData. It’s true that I may have ranted about the sorry state of the public packages available outside of C# or Java. Not unwarranted criticism I think, but I wrote our own internal adaptation which now powers basically all our API clients for Typescript as a single shared no-dependency library.

But you seem to think you know me? Have we met?


Alright, if not for that one job posting, I’m curious where you are getting this information from?

I really think the problem with Rust is the borrow checker. Seriously. It is good, but it is overkill. You have to plan everything around it, and it discourages a lot of patterns or makes them really difficult to refactor.

I would encourage people to understand Hylo's object model and mutable value semantics. I think something like that is far better, more ergonomic and very well-performing (in theory at least).


You can use unsafe code and pointers if you really want, but code will be unsafe, like C or C++.

Look at Hylo. Tell me what you think. You do not need all that juggling. Just use value semantics with lazy copying. The rest is handled for you. Without GC. Without dangling pointers.

TBF, unsafe Rust still enforces much more correctness than C or C++ (Rust's "unsafety" is more similar to Zig than C or C++).

TBF this is not really true. Unsafe Rust is a lot harder than comparable C/C++, because it must manually uphold all safety invariants of Safe Rust whenever it interacts with idiomatic Rust code. (These safety invariants are also why Safe Rust can often be compiled into better-optimized code than the idiomatic C/C++ equivalent.)

By Rust's stronger enforced correctness (also in unsafe Rust) I mean small details like Rust not allowing implicit conversions between integer types. That alone eliminates a pretty big source of hidden bugs in both C and C++ (especially when assigning a wider to a narrower type, or mixing signed and unsigned integers).

All in all I'm not a big fan of Rust, but details like this make a lot of sense (even if they may appear a bit draconian at first) - although IMHO Zig has a slightly better solution by allowing implicit conversions that do not lose information. E.g. assigning a narrower to a wider unsigned integer type works, but not the other way around.
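A tiny illustration of the Rust side of this (illustrative snippet only):

    // Rust refuses implicit integer conversions in both directions;
    // widening and narrowing both have to be spelled out.
    fn main() {
        let small: u8 = 200;
        let wide: u32 = 500;

        // let a: u32 = small;              // error: mismatched types
        let a: u32 = u32::from(small);      // lossless widening, but explicit

        // let b: u8 = wide;                // error: mismatched types
        let b: u8 = wide as u8;             // explicit, silently truncates to 244
        let c: Result<u8, _> = u8::try_from(wide); // checked narrowing: Err here

        println!("{a} {b} {c:?}");
    }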


I wonder if Rust is killing flies with cannons (as we say in Spanish). There are perfectly safe alternatives or very safe ones.

Even in a project coded in modern C++ with async code included and all warnings activated (it is a card game), I found two segfaults in almost 5 years... It can happen, but it is very rare, at least with my coding patterns.

The code is in the tens of thousands of lines of code I would say, not sure 100%, will measure it.

Is it that bad to put one shared pointer here and there, stick to unique pointers, and try to not escape references? This is what I do, and I use spans and string views carefully (you must with those!). I stick to the rule of zero. With all that it is not that difficult to have mostly safe code in my experience. I just use safe subsets except in a handful of places.

I am not saying C++ is better than Rust. Rust is still safer. What I am saying is that an evolution of the C++ model is much more ergonomic and less viral than this ton of annotations with a steep learning curve where you spend a good deal of your time fighting the borrow checker. So my question is:

- when does it stop being worth fighting the borrow checker, and is it better to just replace it with some alternative, even smart pointers here and there? Because it seems to have a big viral cost and refactoring cost, besides preventing valid patterns.


> What I am saying is that an evolution of the C++ model is much more ergonomic and less viral than this ton of annotations with a steep learning curve where you spend a good deal of your time fighting the borrow checker. So my question is:

That "evolution of the C++ model" (the C++ Core Guidelines) has an even steeper learning curve than Rust itself, and even more invasive annotations if you want to apply it across the board. There is no silver bullet, and Rust definitely has the more principled approach to these issues.


I'm not answering your question here, just saying my opinion on C++ vs Rust. I think that the big high-level difference (before diving into details like ownership and the borrow checker) is that C++'s safety is opt-in, while Rust's safety is opt-out. So in C++ you have to be careful each time you allocate or access memory to do it in a safe way. If you're working in a team, you all have to agree on the safe patterns to use and check that your team members are sticking to them during code reviews. Rust takes this burden from you, at the expense of having to learn how to cooperate with the borrow checker.

So, going back to your question, I think that the answer is that it depends on many factors, including also some non-strictly-technical ones like the team's size.


An evolution of the C++ model could be something like Hylo. Hylo is safe. Hylo does not need a borrow checker. Hylo does not need a garbage collector.

That is what I mean by evolution. I do not mean necessarily C++ with Core Guidelines.


I think you replied to the wrong reply.

Unsafe Rust is not harder or safer than C/C++. If you can uphold all safety invariants for C/C++ code (OMG!), then it will be easier to do the same thing for unsafe Rust, because Rust has better ergonomics.

Better ergonomics for what? For refactoring with a zillion lifetime annotations? Annotations go viral down the call stack. That is a headache. Not useless. I know it is useful. Just a headache, a price to pay. For linked structures? For capturing an exception.

No, it is not more ergonomic. It is safer. That's it.

And some parts of that enforcement via this model is terribly unergonomic.


? I believe the Rust efforts in Firefox were largely successful. I think Servo was for experimental purposes and large parts were then added to Firefox with Quantum: https://en.wikipedia.org/wiki/Gecko_(software)#Quantum

My recollection was that those were separate changes - servo didn’t get to the stage where it could be merged, but it was absolutely the plan to build a rendering engine that outperformed every other browser before budget cuts hit.

We did port Servo’s WebRender to Firefox and shipped it everywhere. The only caveat is that it took multiple years of upgrades, fixes, and rewriting it.

It would be interesting to have a postmortem of what went well, what went wrong, etc. for this initial effort.

I believe work continues now somewhere else, but it would be absolutely nice to know more about the experience of others.


> Go had google driving adoption

This is commonly said but I think it's only correct in the sense that Google is famous and Google engineers started it.

Google never drove adoption; it happened organically.


> Rust had mozilla/servo which was ultimately unsuccessful.

There's lots of Rust code in Firefox!

> I haven't seen the “we manage 1-10 MM sloc of complex code using rust” type projects.

Meta has a lot of Rust internally.

The problems with Rust for high-level indie game dev logic, where you're doing fast prototyping, are very specific to that domain, and say very little about its applicability in other areas.


Servo is an ongoing project, it has not "failed" or been unsuccessful in any sense.

I think the original poster is perhaps speaking to previous articles (i.e. https://news.ycombinator.com/item?id=39269949) which, from the outside looking in, made me feel that perhaps this in fact was the case (at least for a period).

Exactly, it's all about the ecosystem and very little about the language features

Kind of both in my opinion. But rust is bringing nothing to the table that games need.

At best rust fixes crash bugs and not the usual logic and rendering bugs that are far more involved and plague users more often.


The ability of engines like Bevy to automatically schedule dependencies and multithread systems, which relies on Rust's strictness around mutability, is a big advantage. Speaking as someone who's spent a long time looking at Bevy profiles, the increased parallelism really helps.

Of course, you can do job queuing systems in C++ too. But Rust naturally pushes you toward the more parallel path with all your logic. In C++ the temptation is to start sequential to avoid data races; in systems like Bevy, you start parallel to begin with.
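A rough sketch of what that looks like (assuming Bevy's 0.13-era API, with made-up components). The two systems below touch disjoint data, so the scheduler is free to run them on different threads in the same frame:

    use bevy::prelude::*;

    #[derive(Component)]
    struct Velocity(Vec3);

    #[derive(Component)]
    struct Health(f32);

    // Reads Velocity, writes Transform.
    fn apply_velocity(mut q: Query<(&mut Transform, &Velocity)>, time: Res<Time>) {
        for (mut transform, velocity) in &mut q {
            transform.translation += velocity.0 * time.delta_seconds();
        }
    }

    // Writes Health only; no overlap with the system above.
    fn regen_health(mut q: Query<&mut Health>, time: Res<Time>) {
        for mut health in &mut q {
            health.0 = (health.0 + 1.0 * time.delta_seconds()).min(100.0);
        }
    }

    fn main() {
        App::new()
            .add_plugins(MinimalPlugins)
            .add_systems(Update, (apply_velocity, regen_health))
            .run();
    }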


Aside from a physics simulation, I'm curious as to what you think would be a positive cost benefit from that level of multithreading for the majority of game engines. Graphical pipelines take advantage of the concept but offload as much work as possible to the GPU.

We were doing threading beyond that in 2010; you could easily have rendering, physics, animation, audio and other subsystems chugging along on different threads. As I was leaving the industry, most engines were trending towards very parallel concurrent job execution systems.

The PS3 was also an interesting architecture (i.e. the SPUs) from that perspective, but it was so far removed from everything else at the time that it never really took off. Getting existing things ported to it was a beast.

Bevy really nails the concurrency IMO (having worked on AA/AAA engines in the past). It's missing a ton in other dimensions, but the actual ECS + scheduling APIs are a joy. The last "proper" engine I worked on was a rat's nest of concurrency in comparison.

That said, as a few other people pointed out, the key is iteration, hot-reload and other things. Given the choice I'd probably do (and have done) a Rust-based engine core where you need performance/stability and some dynamic language on top (Lua, quickjs, etc.) for actual game content.


> That said, as a few other people pointed out, the key is iteration, hot-reload and other things. Given the choice I'd probably do (and have done) a Rust-based engine core where you need performance/stability and some dynamic language on top (Lua, quickjs, etc.) for actual game content.

I fully agree that this will likely be the solution a lot of people want to go with in Bevy: scripting for quick iteration, Rust for the stuff that has to be fast. (Also thank you for the kind words!)


Yeah, it's a fairly clean and natural divide. You see it in most of the major engines, and it was present in all the proprietary engines I worked on (we mostly used Lua/LuaJIT since this predated some great recent options like quickjs).

We even had things like designers writing scripts for AI in literate programming with Lua using coroutines. We fit into 400 KB of space for code + runtime using Lua on the PSP (man, that platform was a nightmare, but the scripting worked out really well).

Rust excels when you know what you want to build, and core engine tech fits that category pretty cleanly. Once you get up in game logic/behavior that iteration loop is so dynamic that you are prototyping more than developing.
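A stripped-down sketch of that split, assuming the mlua crate (any Lua binding looks much the same; the names are made up). The Rust core owns the engine state; the behavior lives in a script that can be reloaded without recompiling:

    use mlua::{Lua, Result};

    fn main() -> Result<()> {
        let lua = Lua::new();

        // "Engine" state exposed to the scripting layer.
        lua.globals().set("player_hp", 10)?;

        // Game-logic script; in a real setup this would be re-read from disk
        // whenever the file changes (hot reload).
        let script = r#"
            if player_hp < 25 then
                return "flee"
            else
                return "attack"
            end
        "#;

        let decision: String = lua.load(script).eval()?;
        println!("AI decision: {decision}");
        Ok(())
    }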


In big-world high-detail games, the rendering operation wants so much time that the main thread has time for little else. There's physics, there's networking, there's game movement, there's NPC AI - those all need some time. If you can get that time from another CPU, rendering tends to go faster.

I tend to overdo parallelism. Load this file into the Tracy profiler, version 0.10.0, and you can see what all the threads in my program are doing.[1] Currently I'm dealing with locking stalls at the WGPU level. If you have application/Rend3/WGPU/Vulkan/GPU parallelism, every layer has to get it right.

Why? Because the C++ clients hit a framerate wall, with the main thread at 100% and no way to get faster.

[1] https://animats.com/sl/misc/traces/clockhavenspeed02.tracy


Animations are an example. I landed code in Bevy 0.13 to evaluate all AnimationTargets (in Unity speak, animators) for all objects in parallel. (This can't be done on GPU because animations can affect the transforms of entities, which can cause collisions, etc. triggering arbitrary game logic.) For my test workload with 10,000 skinned meshes, it bumped up the FPS by quite a bit.

"Fearless concurrency"

C++ classes with inheritance are a pretty good match for objects in a 3D (or 2D) world, which is why C++ became popular with 3D game programmers.

This is not at all my experience.

What I have experienced is that C++ classes with inheritance are good at modeling objects in a game at first, when you are just starting and the hierarchy is super simple. Afterwards, it isn't a good match. You can try to hack around this in several ways, but the short version of it is that if your game isn't very simple you are better off starting with an Entity Component System setup. It will be more cumbersome to use than the language-provided features at first, but the lines cross very quickly.


I like the Javascript way of objects just having fully mutable keys/values like dictionaries, with no inheritance or static typing.

Hmm, no, not really in my experience. Even the old "Entities and Components" system in Unity was better, because it allowed composing GameObject behaviour by attaching Component objects, and this system was often replicated in C++ code bases until it "evolved" into ECS.

This is how I feel about golang and systems programming. The strong concurrency primitives and language simplicity make it easier to write and reason about concurrent code. I have to maintain some low level systems in python and the language is such a worse fit for solving those problems.

Yeah, OOP makes sense for games. The language will matter a bit for which one takes off, but anything will work given enough support. Like, Python doesn't inherently make a lot of sense for data processing or AI, but it's good enough.

OOP kind of goes out the window when people start using entity component systems. Of course, like the author, I'm not sure I'll need ECS since I'm not building a AAA game.

Had to look up ECS to be honest, and it's pretty much what I already do in general dev. I don't care to classify things, I care what I can do with something. Which is Rust's model.

Interfaces or traits are not ECS though. ECS is mostly concerned with how data is laid out in memory for efficient processing. The composability is (more or less) just a nice side effect.

This is correct. I wonder how Rust models SoA with borrowing. Is it doable, or does it become very messy?

I usually have some kind of object that apparently looks like OOP but points all its features to the SoA. All that would be borrowing and pointing somewhere else in slices or similar in Rust I assume?


AFAIK tagged-index-handles are typically used for this (where the tag is a generation-counter to detect 'dangling handles'), which more or less side-steps the borrow checker restrictions (e.g. see https://floooh.github.io/2018/06/17/handles-vs-pointers.html).
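For anyone unfamiliar with the pattern, a bare-bones sketch (hypothetical types, not taken from any particular crate):

    // Systems hold small Copy handles instead of borrows into the storage;
    // a stale handle is detected by comparing generations.
    #[derive(Clone, Copy, PartialEq, Eq, Debug)]
    struct Handle {
        index: u32,
        generation: u32,
    }

    struct Slot<T> {
        generation: u32,
        value: Option<T>,
    }

    struct Arena<T> {
        slots: Vec<Slot<T>>,
        free: Vec<u32>,
    }

    impl<T> Arena<T> {
        fn new() -> Self {
            Arena { slots: Vec::new(), free: Vec::new() }
        }

        fn insert(&mut self, value: T) -> Handle {
            if let Some(index) = self.free.pop() {
                let slot = &mut self.slots[index as usize];
                slot.value = Some(value);
                Handle { index, generation: slot.generation }
            } else {
                self.slots.push(Slot { generation: 0, value: Some(value) });
                Handle { index: self.slots.len() as u32 - 1, generation: 0 }
            }
        }

        fn remove(&mut self, h: Handle) -> Option<T> {
            let slot = self.slots.get_mut(h.index as usize)?;
            if slot.generation != h.generation {
                return None; // stale handle
            }
            slot.generation += 1; // invalidates every outstanding handle to this slot
            self.free.push(h.index);
            slot.value.take()
        }

        fn get(&self, h: Handle) -> Option<&T> {
            let slot = self.slots.get(h.index as usize)?;
            if slot.generation == h.generation { slot.value.as_ref() } else { None }
        }
    }

    fn main() {
        let mut arena = Arena::new();
        let h = arena.insert("goblin");
        assert_eq!(arena.get(h), Some(&"goblin"));
        arena.remove(h);
        assert_eq!(arena.get(h), None); // dangling handle caught, no UB
    }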

Sorry I got lost in that sentence. What is Rust's model?

Rust has traits on structs instead of using inheritance. Aka composition.

Even PHP has traits by now. Languages tend to incorporate other languages' successful features. There is of course a risk of feature inflation. There are languages that take avoiding that inflation as a goal, such as Zig, or that arrive there as a byproduct of being very focused on a specific use case, like AWK.

AFAIK composition, in the traditional sense, means that you put your objects/concepts together from different smaller objects or concepts. Composition would be to have a struct Car that uses another struct called Engine to handle its driving needs. A car “has a” engine. A trait that implements the “this thing has an engine” behavior isn’t composition, it’s actually much closer to [multiple] inheritance (a car “is a” motorized vehicle).
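In Rust terms the distinction looks roughly like this (illustrative types only):

    struct Engine {
        horsepower: u32,
    }

    // Composition: a Car "has an" Engine.
    struct Car {
        engine: Engine,
    }

    // A trait is closer to an interface: a Car "is" Motorized,
    // but it inherits no state or behavior from anywhere.
    trait Motorized {
        fn start(&self);
    }

    impl Motorized for Car {
        fn start(&self) {
            println!("starting {} hp engine", self.engine.horsepower);
        }
    }

    fn main() {
        let car = Car { engine: Engine { horsepower: 90 } };
        car.start();
    }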

Traits do implement interface inheritance, but that doesn't have the same general drawbacks as implementation inheritance (such as the well-known "fragile base class" problem).

I don't know the terminology. I just know that Rust does whatever the alternative is to the Java way with inheritance. You don't get stuck with the classic classification problem.

But that... wasn't in your comment at all...

If I say "I don't care about safety, I care about expressiveness. Which is Rust's model"... "which" has to refer to one of the other things I just mentioned (safety or expressiveness) not some other concept.


You can also have structs be generic over some "tag" type, which when combined with trait definitions gets you quite close to implementation inheritance as seen in C++ and elsewhere. It's just less common because usually composition is all that's required.
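A small sketch of that pattern (made-up names): the shared fields and methods live on the generic struct, and each tag type gets its own extra impl block.

    struct Button;
    struct Slider;

    struct Widget<Kind> {
        x: f32,
        y: f32,
        kind: Kind,
    }

    impl<Kind> Widget<Kind> {
        // "Base class" behavior shared by every widget kind.
        fn move_to(&mut self, x: f32, y: f32) {
            self.x = x;
            self.y = y;
        }
    }

    impl Widget<Button> {
        fn click(&self) {
            println!("button at ({}, {}) clicked", self.x, self.y);
        }
    }

    impl Widget<Slider> {
        fn drag_to(&mut self, x: f32) {
            self.move_to(x, self.y);
        }
    }

    fn main() {
        let mut b = Widget { x: 0.0, y: 0.0, kind: Button };
        b.move_to(10.0, 20.0);
        b.click();

        let mut s = Widget { x: 0.0, y: 0.0, kind: Slider };
        s.drag_to(5.0);
        println!("slider at {}", s.x);
    }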

To be clear, the reason why Python is so popular for data wrangling (including ML/AI) is not due to the language itself. It is due to the popular extensions (libraries) exclusively written in C & C++! Without these libraries, no one would bother with Python for these tasks. They would use C++, Java, or .NET. Hell, even Perl is much faster than Python for data processing using only the language and not native extensions.

Python makes sense because of accessibility and general comfort for relatively small code bases with big data sets.

Those data scientists, at least in my experience, are more into math/business than into the most efficient programming.

Or at least that was the situation at first, and it stuck.


Disagree - the adoption of C++ was more about Moore's law than the ecosystem, although having compilers that were beginning to not be completely rubbish also helped.

Also C++ could be adopted incrementally by C developers. You could use it as “C with classes”, or just use operator overloading to make vector math more tolerable, or whatever subset that you happened to like.

So there’s really three forces at play in making C++ the standard:

1) The Microsoft ecosystem. They literally stopped supporting C by not adopting the C99 standard in their compiler. If you wanted any modern convenience, you had to compile in C++ mode. New APIs like Direct3D were theoretically accessible from C (via COM) but in practice designed for C++.

2) Better compilers and more CPU cycles to spare. You could actually count on the compiler to do the right thing often enough.

3) Seamless gradual adoption for C developers.

Rust has a good compiler, but it lacks that big ticket ecosystem push and is not entirely trivial for C++ developers to adopt.


I'd say Rust does have that big ticket ecosystem push. Microsoft has been embracing Rust lately, with things like official Windows bindings [1].

The bigger problem is just inertia: large game engines are enormous.

[1]: https://github.com/microsoft/windows-rs


Repo contributor here, just to curb some expectations a bit: it's one very smart guy (Kenny), his unpaid volunteer sidekick (me), and a few unpaid external contributors. (I'm trying to draw a line between those with and without commit access, hence all the edits.)

There's no other internal or external Microsoft /support/ that I'm aware of. I wouldn't necessarily use it as a signal of the company's intentions at this time.

That said, there are Microsoft folks working on the Rust compiler, toolchain, etc. side of things too. Maybe those are better indicators!


That's disappointing on Microsoft's part, because their docs make it seem like windows-rs is the way of the future.

Thanks for your work, though!


Don't be; they also killed C++/CX, and even went to CppCon 2016 telling us what a great future C++/WinRT would bring us.

Now, almost a decade later, VS tooling is still not there, stuck in an ATL/VC++ 6.0-like experience (they blame it on the VS team); C++/WinRT is in maintenance, only bug fixes, and all the fun is on Rust/WinRT.

I would never trust this work for production development.


I wish Microsoft had any direction on the 'way of the future' for native apps on Windows

If they did publish a “way of the future” direction, would you believe them?

Fool me N times then shame on them, fool me N+1 times, then shame on me sort of thing.


The most infuriating thing is their habit of rebuilding things just about the time they reach a mature and highly stable state, creating an entirely new unstable and unreliable system. And then, by the time that system almost reaches a stable state, it's scrapped and it all starts over again.

WPF -> UWP -> WinUI -> WinUI 2 -> WinUI 3 is just such a ridiculous chain. WPF was awesome, highly extensible, and could have easily and modularly been extended indefinitely - while also maintaining its widespread (if unofficial) cross platform support and just general rock solid performance/stability. Instead it's the above pattern over and over and over.

And now it seems WinUI 3 is also dead, alas without even bothering with a replacement. Or maybe that's XAMARIN, wait I mean MAUI? Not entirely joking - I never bothered to follow that seemingly completely parallel system doing pretty much the same things. On the bright side this got me to finally migrate away from Microsoft UI solutions, which has made my life much more pleasant since!


I'd have bought into MAUI if there was Linux support in the box.

I'd say the inertia is far more social than codebase-size related. Right now, whilst there are pockets of interest, there is no broader reason to switch. Bevy as the leading contender isn't going to magic its way to being capable of shipping AAA titles unless a studio actually adopts it. I don't think it's actually shipped a commercially successful indie game yet.

Also game engines emphatically don't have to be huge. Look at Balatro shipping on Love2d.


> Also game engines emphatically don't have to be huge. Look at Balatro shipping on Love2d.

Balatro convinced me that Love2D might be a good contender for my next small 2D game release. I had no idea you could integrate Steamworks or 2D shaders that looked that good into Love2D. And it seems to be very cross-platform, since Balatro released on pretty much every platform on day 1 (with some porting help from a third party developer it seems like).

And since it's Lua based, I should be able to port a slightly simpler version of the game over to the Playdate console.

I'm also considering Godot, though.


There’s a pretty big difference between the Playdate and anything else in performance but also in requirements for assets. So much so I hope your idea is scoped accordingly. But yeah Love2d is great.

It is. I've already half ported one of my games to the Playdate (and own one), I'm pretty aware of its capabilities.

The assets are what I struggle with most. 1-bit graphics that look halfway decent are a challenge for me. In my half-ported game, I just draw the tiles programmatically, like I did in the Pico-8 version (and they don't look anywhere near as good as a lot of Playdate games, so I need to someday sit down and try to get some better art in it).


There are a few successful games like Tunnet [1] written in Bevy.

[1]: https://store.steampowered.com/app/2286390/Tunnet/


Looks cool and well received but at ~300ish reviews hardly a shining beacon if we extrapolate sales from that. But I'll say that's a good start.

Speaking as a Godot supporter, I don't think sales numbers of shipped games are relevant to anyone except the game's developer.

When evaluating a newer technology, the key question is: are there any major non-obvious roadblocks? A finished game (with presumably decent performance) tells you that if there are problems, they're solvable. That's the data.


Game engines are tools, not fan clubs. It's reasonable to judge them on how well they perform the job they are designed for. As someone who cares about the commercial viability of their technology choices, this is a small but positive signal.

What it tells me is someone shipped something and it wasn’t awful. Props to them!


> A finished game (with presumably decent performance) tells you that if there are problems, they're solvable.

It doesn't tell you anything about velocity, which is by far the most important metric for indie devs.

After all, the studio could have expended (maybe) twice as much effort to get a result.


Or maybe Rust allowed them to develop twice as fast. Who knows? We're going by data here, and this data point shows that games can be made in Bevy. No more and no less.

Agreed. We've learned a lot from Godot, by the way. I consider all us open source engines to be in it together :)

So far I am way less productive in rust than in any language I've ever used for actual work, so to rewrite an entire game engine would seem like commercial suicide.

"so far" is doing a lot of heavy lifting there =)

I was the same the first two times I tried to use rust (earnestly). However, one day it just "clicked" and my productivity exceeds that of almost anything else, for the specific type of work I'm doing (scientific computation)


I think we shouldn't expect any language to lead different programmers to the same experiences. Rust has the inital steep learning curve, and after that it's a matter of taste whether one is willing to forge on and turn it into a honed tool. Also, I think it's clear that Rust excels in some fields far more naturally than in others. Making blanket statements about how Rust, or any language, is (un)productive is a disservice to everyone.

Yes, the Google folks are also funding efforts to improve Rust/C++ interop, per https://security.googleblog.com/2024/02/improving-interopera...

Thanks for the link. This one was also posted a while back in a Rust comment, and when I first read it I thought Google had used Rust in the V8 sandbox, but re-reading it, it seems the article uses Rust as an 'example' of a memory safe language and does not explicitly say that the sandbox uses Rust. Maybe someone with more knowledge can confirm that Rust was (or was not) used in the V8 Google Chrome sandbox example...

https://v8.dev/blog/sandbox


Rust is not used in V8, to my knowledge.

That description of problems bodes well for Zig

Theoretically accessible describes the experience of trying to use D3D from C very well!

Was trying to use it with some kind of GCC for Windows. The C++ part was still lacking some required features, so it was advised to use D3D from C instead of C++. There were some helper macros, but overall I was glad when Microsoft started to release their Express (and later Community) Editions of Visual Studio.


I access D3D(11) from C in my libraries and tbh it's not any different from C++ in terms of usability (only difference is that the "this" argument and vtable indirection is implicit in C++, but that's just syntax sugar that can be wrapped in a macro in C).

Not true anymore - C11 and C17 are either supported or coming.

https://devblogs.microsoft.com/cppblog/c11-and-c17-standard-...


Not really relevant to 30 years ago though.

I worked on many of Activision's games 1995-2000, and C++ was the overwhelming choice of programming language for PC games. C was more common for consoles. In 1996 the quality of the MSFT IDE/compiler, plus the CPUs available at the time, was such that it could take an hour to compile a big game. By 1998 it was a few minutes. As I recall, I think MSFT purchased another company's compiler and that really changed Visual Studio.

I was a developer on the Microsoft C++ compiler team from 1991 to 2006. We definitely didn't purchase someone else's compiler in that time. We looked at the EDG front end at various times but never moved over to it while I was there.

Perhaps the speed-up you remember had something to do with the switch-over from 16 bits to 32, which would have been the early to mid 90s. Or you're thinking of Microsoft's C compiler starting from Lattice C, back in the 80s before my time. There was also a lot of work done on pre-compiled headers to speed compilation in the latter half of the 90s (including some that I was responsible for).


I heard that early versions of C++ IntelliSense from Visual Studio used Edison Design Group's (EDG) front end. Is that true? No trolling here -- honest question. If yes, are they still using it now?

Not true by the time I retired in 2007, but I've got a vague memory of talking to someone on the C++ front-end team some time after that and EDG for IntelliSense being mentioned. So no idea if that's really true or not, and if so, whether that's true today.

I was heavily involved in the first version of C++ IntelliSense, roughly 1997?, and it was all home-grown. It was also a miracle it worked at all. I've blocked out most of the ugly details from my memory, but parsing on the fly with a fast enough response time to be useful in the face of incomplete information about which #if branches to take and, especially, template definitions was a tower of heuristics and hacks that barely held together. Things are much better nowadays with more horsepower available to replace those heuristics.


I was a teenager at that point. I learnt C in the early 90s and C++ after 96 IIRC. Didn’t start professionally in games until 2004 though!

> and didn't become popular for gamedev until the turn of the millenium

Wasn't this also because Microsoft had terrible support for C?

Since the mid-90s, a number of gamedevs moved to C++ but were unhappy with the results: how OOP works, exception handling, the STL, etc.

My understanding is that by the late 90s many game developers, despite using C++, were still coding more in line with C programming than (proper) C++.

Mostly C code, but using some features of C++ - like functions inside a struct, or namespaces - that did not sacrifice compilation and runtime speed.


We wrote this in C++ (and assembler), but used only the most obvious language features. We laid down the first code in '95 or '96:

https://www.youtube.com/watch?v=9UOYps_3eM0


Yeah, gaming industry has become mature enough to build up its own inertia so it will take some time for new technologies to take off. C# has become a mainstream gamedev language thanks to Unity, but this also took more than a decade.

Comparing how long it took a programming language to spread in the 80s with how long it takes today is a bad vantage point. Stuff took much longer to bake back then -- but even so the point is moot: as other commenters pointed out, it has taken roughly the same amount of time, from 2015 to today.

Hmm, I don't agree. We're far away from the frantic hardware and software progress of the 80s and 90s. Especially in software development it feels like we've been running in circles (but very, very fast!) since the early 2000s, and things that took just a few months or at most 2-3 years to mature in the 80s or 90s take a decade or more now.

The concept of AAA games didn't even exist back in 1985; very few people were developing games in that era, and even fewer were writing "complex" games that would need C++.

The SNES came out in 1990, and even then it had its own architecture and most games were written in pure assembly. The PlayStation had a MIPS CPU and was one of the first to popularize 3D graphics, the biggest complexity leap.

I believe you are seeing causation where only correlation should be given. C++ and more complex OOP languages just joined the scene when the games themselves became complex, because of the natural evolution of hardware and the market.


Many tried C++ in the early 90s, but wasn't it too slow/memory intensive? You had to use lots of inline C/assembly to get a bit of performance. Nowadays everything is heavily optimized, but back then it wasn't.

If you’re referring to game dev specifically, there have been (and continue to be) concerns around the weight of C++ exception handling, which is deeply-embedded in the STL. This proliferated in libraries like the EASTL. C++ itself however is intended to have as many zero-cost abstractions as possible/reasonable.

The cost of exception handling is less of a concern these days though.


Exception handling is easy enough to disable. Luckily, or C would probably still be game developers' go-to.

Seems like a few contradictory ideas here. Rust is supposed to be a better, safer C/C++.

Then a lot of comments here say that games are best done in C++.

So why can't Rust be used for games?

What is really missing, beyond an improved ecosystem of tools, all also built on Rust?


> I'd expected some AAA title to be written in Rust by now.

Why? Those kinds of game engines are enormous amounts of code, and there's little incentive to rewrite.

I do strongly disagree that we aren't ever going to see large-scale game development in Rust; it just takes time. Whether games adopt an engine is largely about that engine's maturity rather than anything about the language. Bevy is quite young; 0.13 doesn't even have support for animation blending yet (I landed that for 0.14).


It was a few years back that the question came up to the developers of a Call of Duty title: "Is there still code from Quake 3 in COD?". They dodged around it by saying something like "we cannot deny this, but we use the most appropriate tech where needed".

While not confirmation, I wouldn't be surprised if there are a few nuggets of Q3 in that code base still doing some of the basics. That would be really cool if it is true.

It seems like unless you are someone like John Carmack or most of Nintendo, game dev tools are about what can get the best results quickest rather than any sort of technical specifics. It is a business after all.


A neat real-world example of ancient Quake code surviving to this day is visible in Valve's games - the hardcoded patterns for flickering lights in Quake 1 survived into GoldSrc and then into Source and then into Source 2, most recently showing up in Half-Life: Alyx, 24 years on from their original appearance in Quake 1.

https://www.alanzucconi.com/2021/06/15/valve-flickering-ligh...

Basically all of the bigger systems will have been Ship-of-Theseus'd several times over by now, but little things like that can slip through the cracks.


That light flickering is quite cool, thanks for sharing. It reminds me of the Wilhelm scream, but on a much smaller scale of course.

> game dev tools are about what can get the best results quickest rather than any sort of technical specifics. It is a business after all.

Bingo. Rust's biggest strength is correctness. But games aren't mission critical, and gamers are very tolerant of bugs (maybe not on social media, but very few buggy games have had their sales impacted). Your biggest sale to AAA game devs is to engine programmers, to minimize tech debt. But as we are seeing with the current industry, that's not exactly something companies care about until it's too late.

Then on the indie level we get articles like this. Half the article ultimately came down to "it's faster to break things and iterate than to do it right once". Again, similar lack of need for bug-free games. In addition, few indie games are scoped to a point where they need a highly disciplined ECS solution to scale with.

The author even criticizes the "tech specs" part of the Rust gamedev community. Different tools, different goals, different needs. IMO, Rust will help make some very robust renderers one day, but ultimately the scripting will be done in another language. Similar to how Unity uses C# scripting on top of a C++ engine, which they run through IL2CPP to bring it back to a full C++ game.


This, exactly. As an embedded-turned-Unreal developer, the first impression I had while using Unreal was how little concern for correctness there is overall. UB is used liberally, and there's clearly a larger focus on development speed and ease of use compared to safety and correctness. If a game has integer overflow or buffer overflows, nobody cares. Vice versa, you need to keep the whole thing usable enough for the various 3D artists and such who have a hard time understanding advanced programming.

If that's the question... Let me assure you that there are decades-old pieces of code inside of, and used to assemble, many modern AAA games coming out of mature studios. The systems and tooling is typically carried forward. I don't think this is some big secret and you've intuited exactly the reason why:

> game dev tools are about what can get the best results quickest rather than any sort of technical specifics. It is a business after all.


Not surprised at all that this stuff sticks around. I find it very endearing actually. Ain't broke, don't fix it!

A lot of big projects have amazing longevity in their older architectural decisions. Unreal still has a lot of stuff in it that people who used UE1 would recognize; I did most of my professional development on UE3 and a bunch of that is still pretty recognizable. Similarly, Chrome is a product of the time it was first created. And looking into the Windows source is probably like staring into the stygian abyss.

There is a lot of legacy and tech debt out there!


I remember years back someone from Microsoft calling the Windows code base "The Abyss" because of how much technical legacy there was in it.

I think it was Steve Gibson who said that the Windows code base had some very questionable things in it. For instance they had work experience high school students working on code that made it into the final build that was less than spectacular. Like how Windows used to stall when you put a CD in and wouldn't proceed until the disc spun up and started reading data.

Windows 11 probably would still do that but I don't know because I don't have a disc drive any more.


It wasn't really Windows lagging, it was Explorer. There used to be more things in Explorer that were blocked on something ultimately blocked by I/O.

This tends to not be the case so much any more, so I doubt it would happen today.

Instead you get the dreaded "Working on it....". It seems like hard drives can be just as slow to spin up these days as CDs were back in the day.


Damn I forgot about explorer hanging when you put a CD in. That was especially terrible when you didn't have DMA

"I tend to agree about the "async contamination" problem. The "async" system is optimized for someone who needs to run a very large web server, with a huge number of clients sending in requests. I've been pushing back against it creeping into areas that don't really need it."

100% this. As I say elsewhere in these threads: Rust is the language that Tokio ate. It isn't even just async viral-chain-effect, it's that on the whole crates for one async runtime are not even compatible with those of another, and so it's all really just about tokio.

Which sucks, if you're doing, y'know, systems programming or embedded (or games). Because tokio has no business in those domains.


It does in my domain of systems programming with async data handling. Tokio works like a dream - slipping into the background and just working so I can concentrate on the business logic.

I know this is a late reply to your post, but your wording prompted a question. I will preface by saying this is not some sort of semantic flamebait, it is also not supposed to be a gatekeeping exercise. You state your domain is systems programming, but then talk about the event loop and scheduler for your program as ancillary details and say that your concentration is on business logic. I tend to view systems programming as development of things that have no business logic, because that is the domain of application programming. Also, I tend to think that a defining feature of systems programming is development that can not just accept a default solution to something as impactful as an event loop/scheduler/executer, but have to focus deeply on those aspects of a program that are the crux of its actual computational operation and interactions between those parts.

In the context of games, the systems programming is the renderer, audio engine, physics calculations, and things like a task system and dispatcher/scheduler, etc. As compared to the actual application specifics of levels, art, dialogue, interactions, UI, etc which to me are not systems programming.

With that said, how do you define systems programming? I’m really interested in how various devs tend to view the ‘cut-off’ between systems and application development. Sometimes I’m pretty sure I am on the extreme end of disjointness of the two and non-accepting of any ‘business logic’ type development qualifying as systems programming.

TL;DR - What is your definition of systems programming and do you include things like ‘business logic’ within that definition?


Even within what you discuss, things like renderers, audio engines and physics calculations have business logic, which I interpret as being the logic pertinent to their specific tasks, as opposed to support logic. Clearly these sorts of terms are heavily overloaded, so please don't get too hung up on the precise term I used.

That said, I think the view of systems programming is more relevant. My understanding is essentially the same as Wikipedia: "systems programming aims to produce software and software platforms which provide services to other software, are performance constrained, or both". I don't see business logic excluded from that definition.

For context, the area I use it is in direct interaction with an FPGA in the middle layer of a bigger system. The software acts as a performance critical controller of the FPGA and data marshalling system, controlling the DMAs and shunting the data into the network subsystem. Another bit of the system on different hardware then receives the data and does some performance critical signal processing before passing the result to the application layer. The "systems programming" stuff is responsible for translating high level application API commands into low level FPGA control and low level FPGA data and feedback into high level application structures.

Async works really well on the data handling. I have a full back pressure chain from the application, across the network, across the DMA subsystem right down to the FPGA. It also allows careful pinning of different tasks to different cores with pinned runtimes, which is important in maximising the network throughput on the resource-limited CPU cores.

Rust async is great for this kind of stuff. I read a post a while ago, which I annoyingly can't find anymore, in which the author was using a custom reactor and executor to hide cache latency. It was really beautiful and incredibly simple and annoyingly forgotten by me (!).


Disappointing to hear this after battling the same nonsense in JS for years.

It's just endemic to the industry. Framework-itis

Rust is a language made and used by Dunning-Kruger people who violently react to having to learn the prior art.

What did you really expect?


Rust's async/await design makes a lot of sense when you consider its primary goals (C interop, low level control, zero cost abstractions, etc.). Sure, perhaps most of us should be using a language with different constraints as opposed to Rust.

> The "async" system is optimized for someone who needs to run a very large web server,

Even there it's very problematic at scale unless you know what you're doing. async/await isn't zero cost, regardless of what people will tell you.


Absolutely. Async/await typically improves headroom (scalability) at the cost of latency and throughput. It may also make code easier to reason about.

I disagree with this, you're probably not paying much (if at all) in latency or throughput for better scaling.

What you're paying for with async/await is a state machine that describes the concurrent task, but that state machine can be incredibly wasteful in size due to the design of futures and the desugaring pass that converts async/await into the state machine.

That's why I said it's not "zero cost" in the loosest definition of the phrase - you can write a better implementation by hand.


That is true. Rust's async/await desugaring is still missing optimizations. I think that will be ironed out eventually. What mainly concerns me about async/await is that, even with Rust's best efforts, the baseline complexity will probably always be somewhat higher than for sync code. I will be pleased if the gap is minimized and people only need to reach for async when they want to. Right now, the latter isn't the case because of the "virality [of] function coloring".

Definitely makes code harder to reason about.

If you were to write the same code without using async you'd be trudging through a mess of callbacks and combinators. This is what writing futures code before 2018 was like. It was doable if you needed the perf but it sucked. Async is a huge improvement to readability and reasoning that we didn't have before.

No, actually that was just JavaScript. Programming environments with threading models don't have to live that way. Separate threads can communicate through channels and do quite well for themselves. See, how it works is: you do something like let data = file.read(); and then it just sits there on that line until the read is done and then your data has the actual bytes in it and you just use them and go on with your life.

> you do something like let data = file.read(); and then it just sits there on that line until the read is done and then your data has the actual bytes in it and you just use them and go on with your life.

That's exactly how async/await works, except that it translates to state machines under the hood which gives you great performance. No need to mess with threading models, at all.
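
Side by side, it's literally the same shape at the call site — a minimal sketch (assumes a "save.dat" file exists and tokio is a dependency):

    // Blocking: the thread just sits on this line until the bytes are there.
    fn load_blocking() -> std::io::Result<Vec<u8>> {
        std::fs::read("save.dat")
    }

    // Async: reads the same way at the call site, but the compiler turns this
    // fn into a state machine that yields to the executor while I/O is pending.
    async fn load_async() -> std::io::Result<Vec<u8>> {
        tokio::fs::read("save.dat").await
    }

    #[tokio::main]
    async fn main() -> std::io::Result<()> {
        let a = load_blocking()?;
        let b = load_async().await?;
        println!("{} / {} bytes", a.len(), b.len());
        Ok(())
    }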


Yeah, Rust's async/await and lightweight threads are functionally very similar. Function coloring is a problem with async/await, though (for now?).

Until you need cancellation

One rarely really needs that.

Maybe you are both right but your scales are orders of magnitude apart.

> at the cost of latency and throughput.

Compared to what?

Doing epoll manually?


A reactor has to move the pending task to some type of work queue. The task has to be pulled off the work queue. The work queue is oblivious as to the priority of your tasks. Tasks aren't as expensive as context switching, but they aren't free either: e.g. likely to ruin CPU caches. Less code is fewer instructions is less time.

If you care enough, you generally should be able to outdo the reactor and state machines. Whether you should care enough is debatable.


The cache thing is a thing I think a lot of people with a more... naive... understanding of machine architecture don't clue into.

Even just synchronizing on an atomic can thrash branch prediction and L1 caches both, let alone working your way through a task queue and interrupting program flow to do so.


So yeah, you're thinking about the comparison between async/await and manual state machine management with epoll. But that's not what most people have in mind when you say async/await has a performance impact; most of them would immediately think you're talking about the difference from threads.

If I'm not doing slow blocking I/O, I'm not doing epoll anyways.

But the moment somebody drops async into my codebase, yay, now I get to pay the cost.


Either you are doing slow IO (in some of your dependency) or you don't have anyone dropping async in your code though…

Threading, probably.

Async/await isn't related to threading (although many users and implementations confuse them); it's a way of transforming a function into a suspendable state machine.

Games need async/await for two main reasons:

- coding multi-frame logic in a straightforward way, which is when transforming a function into a suspendable state machine makes sense (sketched below)

- using more cores because you're CPU-bound, which is literally multithreading

Both cases can be covered by other approaches, though:

- submitting multi-frame logic as job parameters to a separate system (e.g., tweening)

- using data parallelism for CPU-intensive work
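
For the first point, here's a minimal, hand-rolled sketch of what that suspendable state machine looks like without async/await — roughly what the compiler generates from an async fn, written out by hand and stepped once per frame (the types and numbers are made up for illustration):

    // Hand-written state machine for a multi-frame door-opening animation.
    enum DoorAnim {
        Opening { step: u32 },
        Done,
    }

    impl DoorAnim {
        // Advance by one frame; returns true once the animation has finished.
        fn tick(&mut self, angle: &mut f32) -> bool {
            match self {
                DoorAnim::Opening { step } => {
                    *angle = (*step + 1) as f32 * 3.0; // sweep to 90 degrees over 30 frames
                    *step += 1;
                    if *step >= 30 {
                        *self = DoorAnim::Done;
                    }
                    false
                }
                DoorAnim::Done => true,
            }
        }
    }

    fn main() {
        let mut angle = 0.0_f32;
        let mut anim = DoorAnim::Opening { step: 0 };
        // Stand-in for the game loop: step the state machine once per "frame".
        while !anim.tick(&mut angle) {}
        println!("door open at {angle} degrees");
    }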


I know. But threading, and earlier processes, were less scalable but potentially faster ways of handling concurrent requests.

It's also much easier to reason about, since scheduling is no longer your problem and you can just write sequential code.

That's one way to see it. But the symmetric view is equally valid: async/await is easier to reason about because you see where the blocking points are, instead of having to guess which function is blocking or not.

In any case you aren't writing sequential code, it's still concurrent code, and there's a trade-off between the writing simplicity of writing it as if it was sequential code, and the reading simplicity of having things written down explicitly.

This “write-time vs read-time” trade-off is everywhere in programming, BTW; it's also the difference between errors-as-return-values and exceptions, or between dynamic and static typing, for instance.


I don't think so, because there isn't a performance drawback compared to threads when using async. In fact there's literally nothing preventing you from using a thread per task as your future runtime and just blocking on `.await` (and implementing something like that is a common introduction to how async executors run under the hood so it's not particularly convoluted).

Sure there's no reason to do that, because non-blocking syscalls are just better, but you can…


Threading is compatible with async

"threading alone" as in a thread per request.

> I'd expected some AAA title to be written in Rust by now. That hasn't happened, and it's probably not going to happen, for the reasons the author gives.

The main reason is that you can't ship that Rust code on PS5 in a sensible manner. People have tried, got useless toys to compile, but in the end even Embark gave up. I remember seeing something from them that they had moved Rust to server-only.


> The main reason is that you can't ship that Rust code on PS5 in a sensible manner.

Really - why’s that?


Sony requires that you use their tooling, which you can only get under NDA.

If there was significant pressure from developers Sony would allow Rust. I doubt there is any.

It's a catch 22 - you can't deploy Rust so no one uses Rust for anything, no one uses Rust for anything so there is no reason for Sony to work on Rust deployment.

I think it would be a really good fit for certain parts of the engine - serialization code especially. We have massively complicated C++ code parsing network packets and all sorts of similar sketchy things, always scares me when I see it.


Really a shame that there's that sort of thing going on in 2024 too.

I remember a meeting of local gamedevs with sony in '95. A guy at the back piped up with "So when will the C++ compiler be ready? We've written our whole game in C++".

Crickets. Two Sony dudes at the front look at each other like, "You tell him".

IMHO Rust is the wrong language for game development. But so is C++ TBH.


> I tend to agree about the "async contamination" problem.

Argh, I have the same issue. Sure, if you write JS or Python you probably need async. My current Java back end, which has like 5 concurrent users, does not need async everything at 10x the complexity.


> I'd expected some AAA titles to be written in Rust by now.

"AAA" titles are huge and/or high dev budgets. Even if a game is "starting from scratch" the engine development team are still likely taking code from previous projects to get started. Of course there are other factors. It could be a BIG RISK to move to another programming language when the team, despite frustrations, are already familiar with something else... like the perks C++ brings (you learn from trial-and-error)

Could you imagine learning Rust as-you-go... building a AAA title... and fighting the compiler? To me it is a huge risk!

That is my opinion.. but I am sure others will disagree. If there is anyone on (or did) a AAA title with Rust... I would be happy to hear more about it.

I am not saying it will never happen. Maybe a AAA title is currently in development in Rust. I honestly don't know. However, game developers... if they are looking into Rust... are also looking at Odin, Jai, or Zig. For gaming, I think they are better alternatives than Rust but (again) that is my opinion.

Now for smaller, indie games - the possibility of moving to Rust (or another language) is more likely. Likely a fair percentage have moved away from C++ now.


> * There are very few people doing serious 3D game work in Rust. There's Veloren, and my stuff, and maybe a few others. No big, popular titles. I'd expected some AAA title to be written in Rust by now. That hasn't happened, and it's probably not going to happen, for the reasons the author gives.

At one point the studio behind the Finals was writing game server code in Rust with an Unreal engine client. Not sure if that's true still


The studio you're talking about is Embark Studios, which is openly pretty big on Rust [1]. I think it was rumored that their next project will use a Rust game engine, but I am not sure how it's going now.

[1] https://github.com/EmbarkStudios/rust-ecosystem


Their creative sandbox project is full Rust from client to server I believe. I haven't kept up with it after trying the closed alpha a while ago but it looks like it's still going, and has a name now: https://wim.live

It's still only listed as coming to PC, Mac, Linux and Android so I guess they haven't broken through the barrier of shipping Rust on consoles.


Backend 3d code?

I'm not familiar with the domain, but wouldn't 3D collision checking be considered backend 3D code? Even if it's not rendered, it still needs to be calculated.

Server side rendering for games.

That's a thing?

Absolutely! Any sort of multiplayer game needs a source of authority if you want to prevent cheats like a hacked client lying about its position, and a really good way to do that is load the geometry of your level and run physics checks server side at a lower frequency than once per frame. Godot and Unity both support headless builds for exactly this reason, it's basically the whole game engine, minus the renderer, audio, and UI systems, usually.

That is not server side rendering. Per your own comment:

> minus the renderer

(Otherwise you are completely correct.)

Closest I can think of is server side ragdolls that are rendered the same on all screens and similar stuff.


Yep, Stadia might have failed, but GeForce Now and XBox Cloud Gaming have enough customers to keep them going.

That’s complete different. They are rendering the client and streaming it to users. That doesn’t make the client side code “server side” any more than you streaming Fortnite on Twitch does.

Nope, XBox XDK has facilities for code to be aware of rendering server side.

> The "async" system is optimized for someone who needs to run a very large web server, with a huge number of clients sending in requests.

Can you please elaborate on this? I see a lot of similar concerns in other contexts too. Linux kernel's scheduler for example. Is it a throughput/latency tradeoff?


The current popularity of the async stuff has its roots in the classic "c10k" problem. (https://en.wikipedia.org/wiki/C10k_problem)

A perception among some that threads are expensive, especially when "wasted" on blocking I/O. And that using them in that domain "won't scale."

Putting aside that not all of us are building web applications (heterodox here on HN, I know)...

Most people in the real world with real applications will not hit the limits of what is possible and efficient and totally fine with thread-based architectures.

Plus the kernel has gotten more efficient with threads over the years.

Plus hardware has gotten way better, and better at handling concurrent access.

Plus async involves other trade-offs -- running a state machine behind the scenes that's doing the kinds of context switching the kernel & hardware already potentially does for threads, but in user space. If you ever pull up a debugger and step through an async Rust/tokio codebase, you'll get a good sense for what the overhead here we're talking about is.

That overhead is fine if you're sitting there blocking on your database server, or some HTTP socket, or some filesystem.

It's ... probably... not what you want if you're building a game or an operating system or an embedded device of some kind.

An additional problem with async in Rust right now is that it involves bringing in an async runtime, and giving it control over execution of async functions... but various things like thread spawning, channels, async locks, etc. are not standardized, and are specific per runtime. Which in the real world is always tokio.

So some piece of code you bring in in a crate, uses async, now you're having to fire up a tokio runtime. Even though you were potentially not building something that has anything to do with the kinds of things that tokio is targeted for ("scalable" network services.)

So even if you find an async runtime that's optimized in some other domain, etc (like glommio or smol or whatever) -- you're unlikely to even be able to use it with whatever famous upstream crate you want, which will have explicit dependencies into tokio.
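
To be concrete about the alternative being described: a plain thread-per-connection server in std is short, has no runtime dependency, and is fine for the vast majority of workloads. A minimal sketch (the port is arbitrary):

    use std::io::{Read, Write};
    use std::net::TcpListener;
    use std::thread;

    fn main() -> std::io::Result<()> {
        let listener = TcpListener::bind("127.0.0.1:7000")?;
        for stream in listener.incoming() {
            let mut stream = stream?;
            // One OS thread per client, blocking reads/writes, no async runtime.
            thread::spawn(move || {
                let mut buf = [0u8; 1024];
                while let Ok(n) = stream.read(&mut buf) {
                    if n == 0 { break; }
                    let _ = stream.write_all(&buf[..n]);
                }
            });
        }
        Ok(())
    }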


> If you ever pull up a debugger and step through an async Rust/tokio codebase, you'll get a good sense for what the overhead here we're talking about is.

So I didn't quite do that, but the overhead was interesting to me anyway, and as I was unable to find existing benchmarks (surely they exist?), I instructed computer to create one for me: https://github.com/eras/RustTokioBenchmark

On this wee laptop the numbers are 532 vs 6381 cpu cycles when sending a message (one way) from one async thread to another (tokio) or one kernel thread to another (std::mpsc), when limited to one CPU. (It's limited to one CPU as rdtscp numbers are not comparable between different CPUs; I suppose pinning both threads to their own CPUs and actually measuring end-to-end delay would solve that, but this is what I have now.)

So this was eye-opening to me, as I expected tokio to be even faster! But still, it's 10x as fast as the thread-based method. A straight-up callback would still be a lot faster, of course, but it will affect the way you structure your code.

Improvements to methodology accepted via pull requests :).


I'd want to see perf stats on branch prediction misses and L1 cache evictions alongside that though. CPU cycles on their own aren't enough.

It doesn't seem my perf provides a metric for L1 cache evictions (per perf list).

Here's the results for 100000 rounds for taskset 1 perf record -F10000 -e branch-misses -e cache-misses -e cache-references target/release/RustTokioBenchmark (a)sync; perf report --stat though:

async

    Task 2 min roundtrip time: 532
    [ perf record: Woken up 1 times to write data ]
    [ perf record: Captured and wrote 0,033 MB perf.data (117 samples) ]

    ...    
    branch-misses stats:
              SAMPLE events:         54
    cache-misses stats:
              SAMPLE events:         27
    cache-references stats:
              SAMPLE events:         36
sync

    Thread 2 min roundtrip time: 7096
    [ perf record: Woken up 5584 times to write data ]
    [ perf record: Captured and wrote 0,367 MB perf.data (7418 samples) ]

    ...
    branch-misses stats:
              SAMPLE events:       6577
    cache-misses stats:
              SAMPLE events:        159
    cache-references stats:
              SAMPLE events:        682

Interesting. Thing is all you're benchmarking is the cost of sending a message on tokio's channels vs mpsc's channels.

It would be interesting to compare with crossbeam as well.

But not sure this reflects anything like a real application workflow. In some ways this is the worst possible performance scenario, just two threads spinning and spinning at the fastest speed they can, dumping messages into a channel and pulling them out? It's a benchmark of the channels themselves and whatever locking/synchronization stuff they use.

It's a benchmark of a "shared concurrent data" situation, with constant synchronization. What would be more interesting is to have longer running jobs doing some task inside themselves and only periodically (ever few seconds, say) synchronizing.

What's the tokio executor's settings by default there? Multithreaded or not? I'd be curious how e.g. whether tokio is actually using multiple threads or not here.


Actually I wasn't that interested in throughput, only the latency in terms of instructions executed since sending until it is received, though indeed the throughput is also superior with tokio.

For most applications this difference doesn't really matter, but maybe some applications do a lot of small things where it does matter? In those cases it might be an easy solution to switch from standard threads to tokio async and gain 10x speed, as the structure of the applications remains the same.

> It's a benchmark of the channels themselves and whatever locking/synchronization stuff they use.

Yeah, in retrospect some mutex-benchmark might be better, though I don't expect a message channel implemented on top of that is noticeably slower. A mutex benchmark is probably easier to get wrong..

> What would be more interesting is to have longer running jobs doing some task inside themselves and only periodically (ever few seconds, say) synchronizing.

I don't quite see how this would give any different results. Of course, in that case the time it takes to transmit the message would be completely meaningless.

> What's the tokio executor's settings by default there? Multithreaded or not? I'd be curious how e.g. whether tokio is actually using multiple threads or not here.

It's using the multithreaded executor. I tried the benchmark with #[tokio::main(worker_threads = 1)] and 2 and while with =1 the result was 529 but with =2 it was 566.


> Putting aside that not all of use are building web applications

Perfect moment to mention "rouille" which is a very lightweight synchronous web server framework. So even when you decide to build some web application you do not necessarily have to go down the tokio/async route. I have been using it for a while at work and for private projects and it turned out to be pretty eye-opening.
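
For anyone curious, a minimal handler looks roughly like this (from memory of rouille's API, so check the docs; the address is arbitrary) — each request is served on a plain thread from rouille's pool, no async runtime involved:

    use rouille::Response;

    fn main() {
        rouille::start_server("127.0.0.1:8080", move |request| {
            if request.url() == "/" {
                Response::text("hello from a synchronous handler")
            } else {
                Response::empty_404()
            }
        });
    }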


>now you're having to fire up a tokio runtime

I've been developing in (mostly async) Rust professionally for about a year -- I haven't written much sync Rust other than my learning projects and a raytracer I'm working on, but what are the kinds of common dependencies that pose this problem? Like wanting to use reqwest or things like that?


> Like wanting to use reqwest or things like that?

Yes. Reqwest cranks up Tokio. The amount of stuff it does for a single web request is rather large. It cranks up a thread pool, does the request, and if there's nothing else going on, shuts down the thread pool after a while. That whole reqwest/hyper/tokio stack is intended to "scale", and it's massive overkill for something that's not making large numbers of requests.

There's "ureq", if you don't want Tokio client side. Does blocking HTTP/HTTPS requests. Will set up a reusable connection pool if you want one.


reqwest also has a blocking version, which I use in projects not already using an async rt

https://docs.rs/reqwest/latest/reqwest/blocking/index.html


The blocking implementation still depends on and uses tokio, last I looked.

I've seen this with multiple Rust packages. "Yes, we offer a synchronous blocking version..." and then you look and it's calling rt.block_on behind the scenes.

Which is a pretty large facepalm IMHO


You don't have to do that, Tokio also provides a single-threaded runtime that just runs async tasks on the main thread.
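
Roughly like this, if you build the runtime by hand instead of using the #[tokio::main] macro (a minimal sketch):

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        // Single-threaded runtime: all tasks run on the calling thread,
        // no worker thread pool is spawned.
        let rt = tokio::runtime::Builder::new_current_thread()
            .enable_all() // enable the IO and timer drivers
            .build()?;
        rt.block_on(async {
            println!("running on the current thread only");
        });
        Ok(())
    }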

I'm happy to see someone still doing some work in second life.

There's a lot going on. Someone is doing a new third party viewer, Crystal Frost, in Unity. Linden Lab has a mobile viewer in alpha test. Rendering is PBR now for new objects. There are mirrors! Content upload is moving to glTF, to be compatible with everybody else. Voice is switching from Vivox to WebRTC. Game controller support is in test. New users get better avatars. The dev staff is larger.

None of this is yet increasing Second Life usership much, but it remains the best metaverse around.

I thought the metaverse thing was going to be bigger. Meta spent so much money to produce so little.


> There's a lot going on.

I'd like to use the opportunity to ask: What happened during the covid pandemic? I haven't heard/read anything about second life during the pandemic even though this was probably a once-in-a-lifetime opportunity?

Are there any news sources that you can recommend to keep an eye on second life, because it doesn't seem that it gets that much press coverage?


> What happened during the COVID pandemic?

Usage went up about 10%, and then leveled off. Logged in right now, at 0020 PDT: 32084 users. Varies between 30,000 and 50,000 around the clock.

> News sources

* https://modemworld.me/

* https://ryanschultz.com/


As a game developer for about two decades, I've never considered Rust to be a good programming language choice.

My priorities are reasonable performances and the fastest iteration time possible.

Gameplay code should be flexible, we have tons and tons of edge cases _by design_ because this is the best way to create interesting games.

Compilation time is very important, but also a flexible enough programming structure, moving things around and changing your mind about the most desirable approach several times a day is common during heavy development phases.

We almost never have specifications, almost nothing is set until the game is done.

It is a different story for game engines, renderers, physics, audio, asset loaders etc. those are much closer to system programming but this is also not where we usually spend the most time, as a professional you're supposed to either use off-the-shelf engines or already made frameworks and libraries.

Also, ECS is, IMHO, a useful pattern for some systems, but it is a pain in the butt to use with gameplay or UI code.


> It is a different story for game engines, renderers, physics, audio, asset loaders etc. those are much closer to system programming but this is also not where we usually spend the most time, as a professional you're supposed to either use off-the-shelf engines or already made frameworks and libraries.

But this is where industry interest (the little there is) lies for Rust, is it not? This is what the AAA studios that are researching and prototyping are working on.

C++ is not a popular language to implement the actual game in for all the reasons you list. It is too slow to compile and too rigid. The people who actually build the games, make them tick, are all working in visual scripting languages.


Visual scripting languages are easy to use and practical for low-complexity code, but they scale very poorly once the complexity increases.

Gameplay code is still better written with code, C# or C++ or sometimes Lua.


> visual scripting languages

I'm surprised no one has made such a language that is designed from the ground up to be used as such for Rust. Nim/CoffeeScript come to mind, but they target non-Rust languages. Lua would be close enough if it weren't so alien to everything people like about Rust.


Someone did actually create a scripting language specifically designed to work with rust: https://rhai.rs/
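
Embedding it is only a few lines — a rough sketch from memory of rhai's API (the function and script here are made up), so double-check against the docs:

    use rhai::Engine;

    fn main() {
        let mut engine = Engine::new();
        // Expose a native Rust function to scripts.
        engine.register_fn("heal", |hp: i64| hp + 25);
        // Run gameplay-style logic from a script string (which could be hot-reloaded).
        let hp = engine.eval::<i64>("let hp = heal(10); hp * 2").unwrap();
        println!("hp = {hp}");
    }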

As a non-game dev who uses Rust and Elixir, Rust wouldn't be my first pick for a large gamedev studio for multiple reasons. As for alternatives worth evaluating: Crystal, Cython (compiled Python), or Nim could result in increased gamedev productivity over C++ or C#. Maybe even Go because the iteration and compile times are very fast, and the learning curve is very low.

Often in the past Lua has been used and in my experience it's been quite nice. It's very easy to bind, there's some nice editors out there and the performance is decent.

There's some other game-specific scripting languages that have popped up (angelscript and wren come to mind but there's more). I've not used them in full production products though. Mostly just kicked the tires.

Now that I think about it though, it's been almost 6 years since I've worked on an engine with lua support. Mainly because in the last few years I've been working with unity or unreal.


> Go because the iteration and compile times are very fast

Safety is important and for certain applications, Rust is unrivaled.

But for games, like web apps, where time to market and innovation can be just as if not more important than being free of runtime errors, Go is more suited to rapid development than Rust on compile times alone.

Of course, the libraries and support for both aren't quite there yet, so at this point neither is well suited to game dev.


I agree. We almost have a paradox of choice nowadays because it's easier than ever to create new language platforms. Rust is something different because its thesis is safety and performance by default, optimized primarily for systems development, at the (somewhat intentional) cost of making dangerous things more complicated to accomplish. Unconventional languages are sometimes used as a conspicuous challenge to attract developers or to attempt to move some parts of an industry into new territory.

> Cython (compiled Python), or Nim could result in increased gamedev productivity over C++ or C#

If you're starting from scratch, then maybe. Having had to crash-learn games dev (ex VFX systems person), Unity + C# is just so nice to use. Most of the easiness of Python, but with proper strict typing (which you can turn off, if you want).

Plus the wealth of documentation; it's great. I imagine Unreal is quite good in that regard too.


>As for alternatives worth evaluating: Crystal, Cython (compiled Python), or Nim could result in increased gamedev productivity over C++ or C#.

I read on a recent HN thread that Crystal compilation is slow due to its type inference, IIRC.


Does Crystal support Hot Reloading? The slow compilation speed is a non-starter for me.

The gamedev industry already settled on an almost perfect language for this task (C#), so there is little profit in trying to reinvent the wheel.

And by perfect I mean not the way Unity uses it but the way pure C# engines use it.


They have an interpreter mode now that is quite good and should be well-suited for these situations

Crystal doesn't support parallelism[1], which is a dealbreaker in this context (and for performance sensitive programs in general).

[1]=production grade; additionally, it seems that no work has been done on it for years.


Haha. Nope. Maybe Nim, V*, Go*, or Elixir would be a better choice for such a use-case.

* So fast, they really don't need HCR.


HCR lets you change things while the game is running, in its current state. Fast recompile does not.

Start game, wait for engine to initialize, select level, wait for it to load, move player or camera to desired location. Now iterate on something at that location via HCR. If you have to recompile and restart the game you're not going to have fast iteration



I haven't tried it yet but I've wondered if Elixir might be a good choice for a game server with many concurrent players.

Definitely and for chat.

BEAM/HiPE VM allows native linking using NIFs so it's possible to integrate Erlang or Elixir with C-compatible projects for critical code sections, library interfacing, and perhaps even the majority of a performance-critical game engine as native code. Rustler also exists to write NIFs in Rust. Recall how VMware ESXi core tech was implemented mostly as Linux kernel modules and heavily-modified Linux to turn it inside-out as a type-1 hypervisor.


Go is infamous for its gc latency spikes, which is the thing that games cannot tolerate.

Though 1.18 helped a lot, you'd have to do some major persuasion to game devs that Go's gc is the kind of thing they'd want in their game.

---

EDIT: Not sure about the downvote; Go is known for its (historically at least) unsuitability for RTC or game dev.


I’ve heard that Go has very low-latency GC; I haven’t heard of it having spikes

The problem with Go is its inadequate FFI, which is important for gamedev which tends to be FFI and syscall-heavy due to embedding another gamescript language and/or calling into underlying rendering back-end, sometimes interacting with input drivers directly, etc.

Which is why C# has been chosen so often (it has performance not much worse than C++ (you can manually optimize to match it), zero or almost zero-cost FFI, and can also be embedded, albeit with effort).

There are also ways to directly reduce GC frequency by writing less allocation-heavy code, without having to resort to writing your own drop-in GC implementation (which is supported but I haven't seen anyone use that new API just yet aside from a few toy examples, I suppose built-in GC is good enough).


The overhead for Go in benchmarks is insane in contrast to other languages - https://github.com/dyu/ffi-overhead Are there reasons why Go does not copy what Julia does?

Go has a non-native stack and has to perform stack switching, among other things (hopefully someone with more knowledge than the bare minimum required to criticize Go can chime in :D)

p.s.: mono seems to produce quite a bad result vs .net 6/7/8 huh, time to make a PR


Your comment is down voted because it is false. Go is not "infamous" for gc latency spikes.

It is probably true for game engine dev, but not generally true for game dev, which is a vast field and not as computationally demanding as many imagine. I believe Go's unwillingness to be less strict about some (non-type) semantics would be a bigger problem for game devs than GC.

That's true. Go ain't C4 (JVM), ORCA (Pony), HiPE (Erlang/OTP BEAM), or CLR (C#). The JVM and CLR runtimes have been beaten on for years at immense scale in server-side business settings. I wished Go supported embedded work (without a GC), had an alternative allocator a bit more like Erlang's, and had alternative implementations that transpiled to other languages, but it doesn't. Ultimately, I left when zillions of noobs poured in because it was seen as "easy" and started wasting my time rather than searching for answers themselves.

If performance were such a huge concern, I don't see any valid resistance to Rust, which completely lacks a GC and makes it easy to call C code, other than "it's something different", "there's too much hype", or "I don't like it". Recent development tools like RustRover make it really damn easy to see what's a moved value or a borrow, debug test cases, run clippy automatically, and check crate versions in Cargo.toml. Throw Copilot in there and let it generate mostly correct, repetitious code for you.


I had similar thoughts, about Rust being a good match for game engines but not games. Maybe it suggests Rust game engines might want to include an interpreter for some higher level language to actually do the gamedev in.

Rust is pretty good for writing PL interpreters (and similar tooling) too, actually.


I know you're not asking for recommendations, but Lisp, particularly SBCL, really seems to check all your boxes. I say this as someone who generally reaches for Scheme when it comes to Lisps too.

There are a few game engines[0] for CL, but most of them seem to be catered specifically to 2D games.

[0] https://github.com/CodyReichert/awesome-cl?tab=readme-ov-fil...


> a flexible enough programming structure, moving things around and changing your mind about the most desirable approach several times a day is common during heavy development phases.

That's the kind of code for which Rust-like languages shine. Rich type systems make it easy to change your mind about things and make large changes to your code with confidence.

(Whether Rust tooling is actually at a level to take advantage of that is another question)


> That's the kind of code for which Rust-like languages shine. Rich type systems make it easy to change your mind about things and make large changes to your code with confidence.

I don't think this is true. Rust makes it easy to get the refactor right (generally speaking 100% right). But that's not what they're describing. They're describing the ability to make the refactor fast, even if it doesn't work correctly (in the formal sense of correctly). That is to say, memory leaks and race conditions and all sorts of horrible nastiness may be tolerable during the dev process in exchange for trying out an idea more quickly.

This is, of course, significantly more work at the end to patch up all of the things you did, but if you don't have to do the full work on 99/100 iterations, or got to try out more iterations because of the quick turnaround time, that would be considered a win here.


Pretty much every compiled language with a static typesystem has that "large-scale refactoring support" though. That's not Rust's USP, on the contrary: a too strongly typed language can make refactoring actually harder than it needs be. The sweet spot is somewhere in the middle (where exactly is up for discussion of course).

>Rich type systems make it easy to change your mind about things and make large changes to your code with confidence.

To be fair, they need to be able to make large changes with confidence, because what would be small changes in other languages tend to end up being very large changes in Rust-like languages.


> Also, ECS is, IMHO, a useful pattern for some systems, but it is a pain in the butt to use with gameplay or UI code.

Not a game developer, but each time I tried to make one not using ECS (or something at least similar in spirit) I quickly found myself not being able to proceed due to the sheer mess in the codebase.

How does one normally avoid that?


> Also, ECS is, IMHO, a useful pattern for some systems, but it is a pain in the butt to use with gameplay or UI code.

I'd love to see a language built around ECS. I wonder how nice it can be in a language syntax where ECS is the easiest thing you can do.


> My priorities are reasonable performances and the fastest iteration time possible.

I bought Mount & Blade II Bannerlord on 2020-03-30. I love it to death, but come on...

  // 2024-02-01
  $ curl https://www.taleworlds.com/en/News/552 | grep "Fixed a crash that" | wc -l
  29

  // 2023-12-21
  $ curl https://www.taleworlds.com/en/News/549 | grep "Fixed a crash that" | wc -l
  6

  // 2023-12-14
  $ curl https://www.taleworlds.com/en/News/547 | grep "Fixed a crash that" | wc -l
  101
Maybe feeling like you're iterating fast isn't the same as getting to the destination faster.

Edit: Lol guys calm down with the down-vote party. I was counting crashes, not bugs:

  $ curl https://www.taleworlds.com/en/News/547 | grep "Fixed a bug that" | wc -l
  308
Does your C++ not crash, just theirs?

That game (currently) has 88% positive reviews on steam and a 77 metacritic score with over 15.5k people playing the game right now (according to steamcharts.com)

That's a lot of happy customers.


I can't really comment on the quality of the game or experience or how buggy it feels because I've never played it, but I will say that counting fixed crash situations is a somewhat arbitrary and useless metric. A crash that affected and was reported by a single person, or even nobody because no regular person could really encounter it, is a vastly different situation from a crash that was experienced by even 1% of the users.

The criteria by which something is decided to mention in the patch notes is not always purely because the users care. Sometimes it's because the developers want to signal effort to user and/or upper management.

Maybe Mount and Blade was super buggy in the past and is still super buggy now, so all the crashes fixed are just an indicator of how large the problem is for them and how bad the code still is. I dunno, you didn't really give any information to help on that front.


> If each of those crashes affected and was reported by a single person or even nobody

Then do you really think they'd be spending time fixing it?

(Actually, you know what, they probably would.)


That's why I had a paragraph mentioning different reasons things might be mentioned. I don't think it's uncommon to find a bug that could cause a crash while working on something else, confirm it does crash, and then fix it. If the culture is to mention those things in patch notes even if you're not sure it actually ever caused a user problem, then it will be listed.

That doesn't mean all, or even any, of the listed crashes were like that, but it does illustrate that it's hard to know what they actually mean without additional info.

(for what it's worth, I'm a long time Tarkov player, so I'm definitely familiar with buggy games and apparent development problems with rushing, so this is more a devil's advocate position on my part)


With Rust and the exact same iteration times, management, and deadlines, you end up with the same amount, they're just panic!()s instead. That's an improvement, sure, but it's fighting a symptom.

There are a bunch of useful clippy lints to completely disable most forms of panicking in CI. We use this at my work since a single panic could cost millions of $ in our case.
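
For reference, one common way to set this up is deny-level attributes at the crate root (or the equivalent lint configuration enforced in CI); the exact lint set here is just an example:

    #![deny(clippy::unwrap_used)]
    #![deny(clippy::expect_used)]
    #![deny(clippy::panic)]
    #![deny(clippy::indexing_slicing)] // out-of-bounds indexing also panics

    fn main() {}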

With modern languages that take safety more seriously, it's a lot easier to spot places where the code 'goes wrong'.

In an older language, you have nothing to tell you whether you're about to dereference null:

   foo.bar.baz = ...;
Even if you've coded it 100% correctly, that line of code still looks the same as code which will segfault. You need to look elsewhere in the codebase to make sure the right instructions populated those fields at the right time. If I'm scrolling past, I'll slow down every time to think "Hey, will that crash?"

Compare that with more safety-focused languages where you can see the potential null dereferences on the page. unwrap() or whatever it is in Rust. Since they're visually present, you can code fast by using the unsafe variants, come back later, and know that they won't be missed in a code review. You can literally grep for unsafe code to refactor.
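
A tiny sketch of that difference (hypothetical types, just to show the greppable call site):

    struct Bar { baz: i32 }
    struct Foo { bar: Option<Bar> }

    fn main() {
        let foo = Foo { bar: None };

        // Quick-and-dirty while prototyping: panics loudly, and `grep unwrap` finds it.
        // let baz = foo.bar.unwrap().baz;

        // The cleaned-up version you refactor to later:
        match &foo.bar {
            Some(bar) => println!("baz = {}", bar.baz),
            None => println!("no bar yet"),
        }
    }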


I love Rust, but a crashing released game is better than a half-finished "perfect" game, or a game where you couldn't iterate quickly, and ended up with a perfectly tuned, unfun game.

> a crashing released game is better than a half-finished "perfect" game

For who? I, and I'm pretty sure most other gamers, would rather a fully-finished "perfect" game that took twice as long.


> For who? I, and I'm pretty sure most other gamers, would rather a fully-finished "perfect" game that took twice as long.

Evidence suggests otherwise. Of all demographics, gamers appear to be the most tolerant of buggy software.

I'm playing a 2020 game right now that has (in about 30 hours of gameplay):

1. Crashed twice

2. Froze once

3. Has at least ONE reproducible bug that a player would run into at least once every mission (including the first one).

Since this game is now so old it's not getting any more patches, these bugs are there for all eternity, because they just do not move the needle on enjoyment by the gamer.

Searching forums for Far Cry 5 Bugs gives results like this: https://www.reddit.com/r/farcry/comments/1ai4jzx/has_far_cry...

Gamers just don't care about bugs unless it stops them playing the game at all!

In order for bugs to have an effect on gamer enjoyment, it literally needs to make the game unplayable, and not just make the player reload from the last savepoint.


> Evidence suggests otherwise. Of all demographics, gamers appear to be the most tolerant of buggy software.

Evidence suggests otherwise. Of all demographics, game studios appear to be the most tolerant of buggy software. bladeblablabla

Just go look at CP2077 or BF2042 or Fallout 76 or ...

So many games out there that no one wanted to play until they finally actually made a game that was ready for release, a year or more after they released it.


> 1. Crashed twice 2. Froze once 3. Has at least ONE reproducible bug that a player would run into at least once every mission (including the first one).

Sounds about on par even for enterprise software, in cases where shipping quickly is prioritized over overall quality, doubly so for gamedev which is notorious for long hours and scope creep.


The problem is we would have a lot less games and the games we would get would not be as fun. Rust appears to have the following problems:

1) As the article pointed out, game developers are less productive in Rust. This is a huge problem.

2) Game budgets are not going to get bigger. This means that if Rust reduces productivity, games are going to be less polished, less fun, etc. if they are written in Rust.

3) Game quality is already fine. 99% of the games I play have very few noticeable bugs (I play on an Xbox Series X). Even the games with bugs are still fun.

Basically, gamers are looking for fun games which work well. They are not looking for perfect software which has no bugs.


> As the article pointed out, game developers are less productive in Rust. This is a huge problem.

I don't think it's limited to just game developers though. Unless you are writing something in which any GC time other than 0ns is a dealbreaker, and any bug is also a dealbreaker, you're going to be less productive in Rust than almost any other language.


Oh, come on, we're yet again extrapolating from "Rust is bad at rapid iteration on an indie game" to "Rust is bad at everything". If Rust were really that astoundingly unproductive of a language, then so many developers at organizations big and small wouldn't be using it. Our industry may be irrational at times, but it's not that irrational.

> Oh, come on, we're yet again extrapolating from "Rust is bad at rapid iteration on an indie game" to "Rust is bad at everything".

I am saying that Rust development has a lower velocity than mainstream GC'ed languages (Java, C#, Go, whatever).

I didn't think that you are disputing this claim; if you are disputing this, I'd like to know why you think otherwise.


> I am saying that Rust development has a lower velocity than mainstream GC'ed languages (Java, C#, Go, whatever).

It depends what you measure

For software that must get it right Rust can be more productive. The early cycles of development are slow, especially for people who have not surrendered to the borrow checker, yet. But the lack of simple mistakes, or more accurately the compiler's early detection of simple mistakes dramatically speeds up development

But in a lot of software those mistakes, whilst important, will not "crash the aeroplane", so it is not worth that extra cost in the early cycles

I am not a game developer, or player, but games are in that category I think


> I am saying that Rust development has a lower velocity than mainstream GC'ed languages (Java, C#, Go, whatever).

That's not what you said: you said you're going to be less productive in Rust than nearly any other language, not "mainstream GC'd languages".

> I didn't think that you are disputing this claim; if you are disputing this, I'd like to know why you think otherwise.

Depending on the domain, I am disputing that, because of things like the Cargo ecosystem, easy parallelism, ease of interop with native code, etc. There is no equivalent to wgpu in other languages, for example.


> That's not what you said: you said you're going to be less productive in Rust than nearly any other language, not "mainstream GC'd languages".

I feel that you're selectively reading only what you have talking points to respond to.

Here is exactly what I said:

> Unless you are writing something in which any GC time other than 0ns is a dealbreaker, and any bug is also a dealbreaker, you're going to be less productive in Rust than almost any other language.

I mean, I literally carved out an exception use-case for Rust; viz for software that can't handle GC.

I wrote a single sentence with a single point, not a single point diluted over multiple paragraphs. You have to literally read only half-sentences to interpret my point the way you did.

If you aren't going to even bother reading full sentences, why bother engaging at all?


Would "you're going to be less productive in Rust than nearly any other language unless GC time or any bug are dealbreakers" be a fair summary of what you mean?

Either way, I fully disagree with that. Many more traits of Rust may make it a better choice even if the low productivity claim was true:

- integration with other languages - I know of companies successfully developing a single Rust library and just using thin wrappers for other languages they need support for

- data races detected at compile time - in highly concurrent applications being able to catch data races at compile time is huge. Please take a look at a blog post from the Uber team[1]. A dedicated team investigated 1100 data race occurrences. Data races may lead to bugs that are a PR nightmare for companies, like a bug in GitHub that sometimes resulted in a user being logged in to an account of another user[2].

- Embedded systems

- WASM - there are not that many languages that natively compile to WASM and have good tooling around it. For most GCed languages you have to go for "close enough" alternatives like TinyGo or AssemblyScript or use tools that bundle an entire interpreter in a WASM binary

But even outside these categories, I don't think it's universally true Rust is less productive than alternatives and my experience shows me otherwise. For example, in many domains, you don't care about the borrow checker and lifetimes almost at all. Take a look at a Todo Backend[3] I wrote in Rust[4]. If you take a look at one of the Go implementations of the same thing, you wouldn't probably see much of a difference because of the nature of web backends: you get some data in, you process the data, usually making some database queries, you return some data (or not).

What about stateful applications without a database, though? Surely that must be hell? Even here it's not as black and white as you would like to see it. When I was working at Hopin (once upon a time a unicorn startup scaling extremely fast) we had to implement a presence server - a service holding information on who is online and what event they're attending, which video they're watching etc. Nothing too complex, but we had a requirement to hold up to 100k open connections, and at the time we didn't have any infrastructure for that (most of the stack was Node.js and Rails). Someone wrote a proof of concept in Go using Redis as a backend with a queue and using Redis for leader election with a big caveat - each of the nodes had to process all of the queue items, so we were limited by the size and processing speed of a single Redis node.

When the time came to implement the production version I said: let's treat the application as a database. We cared only about current data. If the application failed, we could restart and clients would reconnect. If we wanted to have a history of presence we could push all of the events to Kafka or another queue, but still mostly use in-memory data for real-time needs.

I had some Rust exposure before, but it was my first production app. I was also joined by a person who had never written Rust before. In two weeks we had a working application while I was also making sure the other programmer codes as much as possible and doing a lot of pair programming. We deployed it shortly after. Then we added a few more features in the next two weeks or so.

The code was extremely simple - more or less a few hashes behind a WebSocket-based API. As all of the data was living through the entire lifetime of the application we didn't have to care about the borrow checker or lifetimes. We had actor-like code - a few threads, each thread holding a data structure and a few channels that send commands. We were moved to other projects, so the presence server became unmaintained, and even then it was working without any issues whatsoever for the next half a year or so. Then there was a big push to scale all of the services to handle a minimum 500k concurrent users, ideally a million. The Rust app needed almost no changes; after some kernel and load balancer tune-up, it could handle up to 2 million connections frequently sending events on a single machine. If we wanted to, we could easily shard it, but there was no need.
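
The pattern, stripped to its bones (a toy sketch, not the actual code): one thread owns the map and is driven purely by commands coming in over a channel, so there are no locks and no lifetimes to fight.

    use std::collections::HashMap;
    use std::sync::mpsc;
    use std::thread;

    enum Command {
        Join { user: String, event: String },
        Leave { user: String },
        Count { reply: mpsc::Sender<usize> },
    }

    fn main() {
        let (tx, rx) = mpsc::channel::<Command>();

        // The "actor": exclusive owner of the presence state.
        thread::spawn(move || {
            let mut presence: HashMap<String, String> = HashMap::new();
            for cmd in rx {
                match cmd {
                    Command::Join { user, event } => { presence.insert(user, event); }
                    Command::Leave { user } => { presence.remove(&user); }
                    Command::Count { reply } => { let _ = reply.send(presence.len()); }
                }
            }
        });

        tx.send(Command::Join { user: "alice".into(), event: "keynote".into() }).unwrap();
        tx.send(Command::Leave { user: "bob".into() }).unwrap();
        let (rtx, rrx) = mpsc::channel();
        tx.send(Command::Count { reply: rtx }).unwrap();
        println!("online: {}", rrx.recv().unwrap());
    }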

The push to go more into real-time features was deprioritized by then, though, so the management said the app has to be rewritten to Node.js. There was one try to do that, which failed after two months or so. This is not to say you can't make an application like that in Node.js. You can, but you can't use the same architecture, cause you can't multithread Node.js applications, thus you have to run multiple processes, thus you have to have some kind of a database or a queue or a service you use (at the time they tried using one of the Pusher-like services, cause they didn't want to handle WebSocket connections themselves).

But even outside of specific examples like that - in my experience, I don't feel less productive in Rust when it comes to writing production-level applications, not necessarily critical or with wild performance needs. It's subjective, of course, but I agree with @pcwalton - if Rust was universally not productive, I don't believe so many companies would be using it.

One last thing to consider is the expressiveness of the language. In many languages, like Go, it's hard to make certain abstractions that are not a burden to use. Even after they introduced generics, most of the ecosystem is still using `interface {}` all over the place and projects like Kubernetes implement their own dynamic runtime type system. Recently I've been working on a load-testing tool running scenarios as WASM binaries called Crows[5] and one of the abstractions I've created is an RPC client that can send requests in both directions. At the code level, you use it like many RPC libraries in higher-level languages. You defined your interface [6] and then you can call it like it was a regular local method[7] which is huge when developing code, especially in an editor with LSP, cause it will show you what methods you can call and what arguments they take. What's more any typo would be caught at compile time as the server and the client share the same interface. In Go even official RPC client is like `client.Call("TimeServer.GiveServerTime", args, &reply)`, which can't be type checked as far as I know. I think the ability to create these kinds of APIs that are preventing you from doing the wrong thing is a huge advantage of the language.

  1. https://www.uber.com/en-DE/blog/data-race-patterns-in-go/
  2. https://github.blog/2021-03-08-github-security-update-a-bug-related-to-handling-of-authenticated-sessions/
  3. https://todobackend.com/
  4. https://github.com/drogus/todo-backend/blob/main/src/main.rs#L138-L151
  5. https://github.com/drogus/crows
  6. https://github.com/drogus/crows/blob/8eac9c9dfb3df3e5f329b5ba1ee85d37bceb6dc2/utils/src/services/mod.rs#L94-L105
  7. https://github.com/drogus/crows/blob/8eac9c9dfb3df3e5f329b5ba1ee85d37bceb6dc2/coordinator/src/main.rs#L80

Have you written much Rust?

Uhh, no, the games we got 15 years ago and before were definitely just as fun.

Hell no. Lots of these games take 5-7 years to make. You want to turn that into 10-14? I can live with the rare crash bugs.

What if it's 5-7, but only after there is a deep enough dev pool and language tooling to address some of the productivity issues mentioned in the blog? Why make up arbitrary x2 factors?

IDK, seems to me like studios did just fine putting release-quality games out at release 15-20 years ago shrug

"rare" LOL


No, the game doesn’t take twice as long. It just gets abandoned half-finished.

The world is full of half-finished games, it takes time and money to push to a finish.


Ah right that's why no games existed two decades ago.

It's a chicken-egg problem. You won't even see 10% of the bugs lurking in your game without releasing it to a wider audience, no matter how long you worked on it or how good your QA process is (that's what Steam's Early Access is for after all). YMMV depending on the complexity of the game of course.

But even if your game code is perfect and completely bug free, there are so many weird PC configs and buggy drivers in the wild that your game will crash for some users. And for the affected users it doesn't matter whether that crash is caused by crappy game code, or some crappy 3rd party software interfering with your game. For the user it's always the game's fault ;)


> You won't even see 10% of the bugs lurking in your game without releasing it to a wider audience, no matter how long you worked on it or how good your QA process is (that's what Steam's Early Access is for after all).

Just because they like to say that doesn't mean it's true. I've had access to see the list of known issues considered "critical" around release time for a few games. They know the bug exists, they just want to release it more than they want to fix it.

> But even if your game code is perfect and completely bug free, there are so many weird PC configs and buggy drivers in the wild that your game will crash for some users.

Which in no way invalidates the point that most modern games are absolutely unplayable for most users at release.

Oh yeah, and also that's why beta testing exists


Perfect is the enemy of good. You never release anything that's perfect.

Perfect is impossible.


> "perfect"

> perfect

See the difference?


> I, and I'm pretty sure most other gamers, would rather a fully-finished "perfect" game that took twice as long

I have recently completed Cyberpunk Phantom Liberty. The game crashed 4-5 times during 100-150 hours of gameplay. The crashes were pretty much painless because I quick save often.

The game was amazing.

The development of the game started in 2012, 12 years ago. I'm not sure you or most gamers would rather have a fully-finished "perfect" Cyberpunk 2077 released in 2036.


> 4-5 times during 100-150 hours of gameplay

Great, thanks for proving my point! If you had played CP at release, how many times would it have crashed?

Do you really think it would have taken them another 12 years to get to the point they're at now if they hadn't released it 4 years ago? SMH


Photoshop does crash. Trust me, if you do enough image editing you'll know it's not even a super rare event. They generally do a poor job of handling situations where you don't have enough storage or RAM.

It didn't stop Adobe from being worth 200B.


Hard to know what TaleWorlds are actually optimising for, because half the features of Bannerlord feel like they've never been played by a dev, let alone iterated on.

How many of those crashes were caused by memory safety issues though?

A lot of those crashes might simply be called a "panic" in Rust.


And yet the fact that Bannerlord game logic is entirely in C# makes this possible:

https://github.com/int19h/Bannerlord.CSharp.Scripting

which in turn makes it a lot easier and more convenient to mod. Try that with Rust...


Yeah this is a common problem in the industry, we rarely have enough time to refactor what should be considered prototype-level code into robust code.

The game dev industry could form a consortium to launch its own dedicated general-purpose language built from scratch to compile very fast like V or Go, run predictably, be much safer, be more reusable, and be extremely productive, applying the lessons learned from C, C++, C#, and more.

Also, I think LLMs will be able to run against code bases to suggest mass codemods to clean things up rather than having humans make a zillion changes or refactoring fragile areas of tech debt. LLMs are already being applied to generate test cases.


Jonathan Blow’s Jai is an attempt at something like this. It’s looking promising so far!

Interesting. I went through the primer spec. It appears to be a different kind of D or Go with some key points. Any new language should begin with a clear thesis of the competitive advantages it offers and the problems it solves over existing customary and alternative tools. Jai appears to fulfill this property, so that's a good sign.

> It is still in development and as of yet is unavailable to the general public.

Is it still the case?


C# is that language (see Godot, Stride, FNA, Monogame).

Not really, it was adopted. It originated from Microsoft as their post-J++ Java alternative for the CLR, aimed at making it easier to write banking server software and Windows apps.

Does it matter what it was 20 years ago? It is the go-to language for gamedev today and only keeps getting better at it.

Both things can be true. I'm saying it wasn't designed as such. I don't know what you're arguing about.

I believe that better tooling can help, yes. With refactoring, debugging, creating performance and style reports, updating documentation and a ton of other stuff.

This comment is nonsense

My impression is that this is due to their non-robust programming style. They do not add fallback behavior when e.g. receiving a null object. It would still be a bug, but it could be a log entry instead of a crash.

> My impression is that this is due to their non-robust programming style.

It's been 50+ years. I don't think that it's worthwhile just telling the programmer to do a better job.

> They do not add fallback behavior when e.g. receiving a null object. It would still be a bug, but could be a log entry instead of crash.

This is a pretty big feedback loop:

  * The programmer puts the null into the code
  * The code is released
  * The right conditions occur and the player triggers it
  * IF DONE SKILLFULLY AND CORRECTLY the game is able to recover from the null-dereference, write it out to a log, and get that log back to the developers.
  * The programmer takes the null out of the code.

If you don't do the first step, you don't get stuck doing the others either.

50+ years and people still fail to grasp this.

You have to put something there (an optional, or a default-constructed object in a useless state), and all you did was skip the null check. In the case of an optional, you've introduced a stack unwind or a panic. Everything else stayed the same. Maybe that default even deleted the hard drive instead of crashing.

Coding is hard. "Just don't code" is not the answer. You can avoid something, but that doesn't mean it won't show up in some other fashion.


Again, if you disallow unwrapping and panicking at the CI level, you actually force your developers to properly handle these situations.

> You have to put something (an optional, or a default constructed object in a useless state)

No, you really don't. There is no default number, no default string, no default piece of legislation, no default function.
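For what it's worth, the CI-level ban on unwrapping mentioned a couple of comments up is doable today; one option (among several) is clippy's restriction lints. A minimal sketch, with made-up game names, showing how denying those lints pushes you toward logging the missing case instead of crashing:

  #![deny(clippy::unwrap_used)]
  #![deny(clippy::expect_used)]
  #![deny(clippy::panic)]

  use std::collections::HashMap;

  // With the lints above, players.get_mut(&id).unwrap() fails `cargo clippy`
  // (which CI can treat as a build failure), so the missing-player case has
  // to be handled explicitly.
  fn heal(players: &mut HashMap<u64, u32>, id: u64) {
      match players.get_mut(&id) {
          Some(hp) => *hp += 10,
          // log entry instead of a crash
          None => eprintln!("heal: no player with id {id}"),
      }
  }

  fn main() {
      let mut players = HashMap::from([(1_u64, 90_u32)]);
      heal(&mut players, 1);
      heal(&mut players, 42); // missing id: logged, game keeps running
  }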


Arbitrary recovery from null pointers isn't a good way to do robust programming. I recommend doing the exact opposite, actually.

https://en.wikipedia.org/wiki/Crash-only_software

https://medium.com/@vamsimokari/erlang-let-it-crash-philosop...


A crash of an actor in BEAM is incomparable to a crash of a video game.

Is it? Is there no reasonable case where you have a subsystem in a game crash, then restart itself? Unless I'm mistaken, I've experienced this myself in video games more than once. Anything beats a full crash with a pointless error message.
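Plain Rust can express a crude version of this without an actor framework. A minimal sketch (subsystem name made up) using std::panic::catch_unwind to let one subsystem die and be reset while the loop keeps going:

  use std::panic;

  // Stand-in for a subsystem tick that may panic.
  fn run_pathfinding_tick(frame: u32) {
      if frame == 1 {
          panic!("pathfinding blew up");
      }
  }

  fn main() {
      for frame in 0..3 {
          // Contain the panic to this subsystem instead of taking everything down.
          if panic::catch_unwind(|| run_pathfinding_tick(frame)).is_err() {
              eprintln!("pathfinding panicked on frame {frame}; resetting it");
              // reset the subsystem's state here before the next frame
          }
      }
  }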

I feel like a lot of people on HN think making a game is like making a web service or a GUI application. Yes, this behavior is used in video games sometimes, but "restart itself" often means reloading a save file or something similar.

But if your video game uses a DSL for actors then you can do it in the DSL, which avoids special arbitrary bug-hiding behavior.

I dare you to board a plane whose software was written that way.
