
It's frustrating to see the C++ committee spend year after year on pointless new over-engineered libraries instead of finally fixing compile times. At a high level, with only one change to the language, we could eliminate this problem entirely!

Consider the following theoretically simple change:

A definition in a file may not affect headers included after it. If you want global configuration, define it at the project level, or in a header included by all files that need it.

i.e. we need to break this construct:

    #define MY_CONFIG 1
    #include "header_using_MY_CONFIG.h"
That's really all we need to do to completely eliminate the nonsense that is constant re-parsing of headers, and to turn the build process into a task graph where each file is processed exactly once and each template is instantiated exactly once, with intermediate outputs that can be fully cached.
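Concretely, the configuration would move out of individual files and into a shared header or a build flag. A minimal sketch (file names made up):

    // my_config.h -- one project-wide configuration header
    #define MY_CONFIG 1

    // some_file.cpp -- includes the config explicitly instead of defining it inline
    #include "my_config.h"
    #include "header_using_MY_CONFIG.h"

Or equivalently, pass -DMY_CONFIG=1 from the build system so every translation unit sees the same value.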

Most real-world large projects already practice IWYU (include-what-you-use), meaning they are already fully compatible with this.

There are some videos by Jonathan Blow explaining that this is exactly why the Jai compiler is so fast. Why must we still suffer these outdated design decisions from 50 years ago in C++? Why can't the tech evolve?

/end rant




I used to think the same thing myself, but now I'm not sure it would solve much. The problem you describe is really just a matter of caching. You should be able to process the tokens of a header to determine all the identifiers it references, at which point you can model header compilation as a memoised function from the definitions of those identifiers to the compiled output. So in your example, every include of the header where MY_CONFIG=1 could reuse the same result.
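Roughly, in toy form (all names here are made up, not a real compiler API), the idea would be something like:

    // Model header "compilation" as a pure function of the header path plus the
    // values of the macros it actually references, and cache it on that key.
    #include <iostream>
    #include <map>
    #include <set>
    #include <string>
    #include <utility>

    using MacroEnv = std::map<std::string, std::string>;  // macro name -> value
    using Compiled = std::string;                          // stand-in for the real output

    std::map<std::pair<std::string, MacroEnv>, Compiled> cache;

    // Pretend compilation; a real compiler would parse and type-check here.
    Compiled compile_header(const std::string& path, const MacroEnv& relevant) {
        std::cout << "compiling " << path << " (cache miss)\n";
        return "compiled(" + path + ")";
    }

    const Compiled& get_header(const std::string& path,
                               const std::set<std::string>& referenced_macros,
                               const MacroEnv& current_defines) {
        MacroEnv relevant;  // the key only contains macros the header can observe
        for (const auto& name : referenced_macros)
            if (auto it = current_defines.find(name); it != current_defines.end())
                relevant.insert(*it);
        auto key = std::make_pair(path, relevant);
        auto it = cache.find(key);
        if (it == cache.end())
            it = cache.emplace(key, compile_header(path, relevant)).first;
        return it->second;
    }

    int main() {
        get_header("header_using_MY_CONFIG.h", {"MY_CONFIG"}, {{"MY_CONFIG", "1"}});
        get_header("header_using_MY_CONFIG.h", {"MY_CONFIG"},
                   {{"MY_CONFIG", "1"}, {"UNRELATED", "2"}});  // cache hit: UNRELATED is invisible to it
    }

The hard part in practice is computing "the macros this header actually references" cheaply and correctly, which is exactly what the preprocessor makes awkward.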

The real issue is just that C++ compilers are horrendously slow. They have been designed with the intention of producing fast executables rather than compiling quickly. Think of Rust, which has a highly structured compilation process and therefore plenty of room for optimisation, yet it still suffers slow compilation because it relies on a C++ compiler backend (LLVM).

I think this is really because C++ builds are fundamentally unstructured. Rather than invoking a compiler on the entire build directory and letting it handle every file, the compiler is invoked once per file, in ways that can be non-trivial. Improving the build process almost always comes at the

Beyond that, C++ developers simply do not care about slow compilation times. If they did, they wouldn't be using C++. It's my personal theory that C++ as a language has effectively self-selected a user base that is immensely tolerant of this kind of thing by driving off anyone who isn't.


> I think this is really because C++ builds are fundamentally unstructured. Rather than invoking a compiler on the entire build directory and letting it handle every file, the compiler is invoked once per file, in ways that can be non-trivial.

True, this would also need to be fixed. Compilation would need to become a single process that can effectively manage concurrency and share data.
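A very rough sketch of what that could look like (hypothetical, nothing like a real compiler driver): one process is handed the whole project, worker threads compile translation units, and they share a single cache of already-processed headers:

    #include <future>
    #include <iostream>
    #include <map>
    #include <mutex>
    #include <string>
    #include <utility>
    #include <vector>

    std::map<std::string, std::string> header_cache;  // header path -> compiled form
    std::mutex cache_mutex;

    std::string compile_header_once(const std::string& header) {
        std::lock_guard<std::mutex> lock(cache_mutex);
        auto [it, inserted] = header_cache.try_emplace(header, "compiled(" + header + ")");
        if (inserted) std::cout << "processed " << header << " exactly once\n";
        return it->second;
    }

    void compile_translation_unit(const std::string& tu,
                                  const std::vector<std::string>& includes) {
        for (const auto& h : includes) compile_header_once(h);  // shared, never re-parsed
        std::cout << "codegen for " << tu << "\n";
    }

    int main() {
        // The driver sees the whole project instead of being invoked once per file.
        std::vector<std::pair<std::string, std::vector<std::string>>> project = {
            {"a.cpp", {"common.h", "config.h"}},
            {"b.cpp", {"common.h"}},
        };
        std::vector<std::future<void>> jobs;
        for (const auto& [tu, includes] : project)
            jobs.push_back(std::async(std::launch::async, compile_translation_unit, tu, includes));
        for (auto& j : jobs) j.get();
    }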


Aren't there modules since C++20, which solve this problem?


Yes, but we are not exactly there yet: https://arewemodulesyet.org/

Edit: Saw after posting that this was already linked in a top-level comment.


Modules are a massively over-engineered "solution" to the problem, one that requires significant refactoring to actually make use of. Have you tried to properly use modules (i.e. create your own in your software, not just import std)? It's super clunky and still hardly usable.
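For reference, a minimal named module looks something like this (file extensions and the required build flags vary by compiler, which is part of the clunkiness):

    // math.cppm (MSVC prefers .ixx) -- module interface unit
    export module math;

    export int add(int a, int b) {
        return a + b;
    }

    // main.cpp -- consumers import instead of including a header
    import math;

    int main() {
        return add(2, 3);
    }

That part is simple enough on its own; the pain is converting a large existing header-based codebase and teaching the build system about module dependencies.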

I doubt we'll see Unreal Engine get any benefit from them for a long time, for example. It could be so much better: something that works fully automatically with almost all existing code, as long as you use IWYU, which is already standard in the large projects that need this the most.



