I used to think the same thing myself, but now I'm not sure that would solve much. The problem you describe is really just a matter of caching. I believe you should be able to process the tokens of a header to determine all the identifiers it uses, at which point you can model header compilation as a memoised function from the definitions of those identifiers to the compiled result. So for your example, every include of the header where MY_CONFIG=1 could reuse the same result.
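
A minimal sketch of that idea in C++ (C++17), with every name made up for illustration (CompiledHeader, scan_identifiers, compile_header are not any real compiler's API): the cache is keyed on only the subset of macro definitions the header actually mentions, so two inclusion sites that agree on MY_CONFIG=1 share an entry even if the rest of their macro environments differ.

    #include <cctype>
    #include <map>
    #include <set>
    #include <string>
    #include <utility>

    // Hypothetical sketch only: memoise a header's compiled form on the
    // definitions of the identifiers it actually references, rather than
    // on the full preprocessor state at the point of inclusion.

    struct CompiledHeader {
        std::string symbols;  // stand-in for parsed declarations
    };

    using MacroEnv = std::map<std::string, std::string>;  // name -> definition

    // Tokenise the header (crudely) and collect every identifier in it.
    std::set<std::string> scan_identifiers(const std::string& text) {
        std::set<std::string> ids;
        std::string cur;
        for (char c : text) {
            if (std::isalnum(static_cast<unsigned char>(c)) || c == '_') {
                cur += c;
            } else if (!cur.empty()) {
                ids.insert(cur);
                cur.clear();
            }
        }
        if (!cur.empty()) ids.insert(cur);
        return ids;
    }

    CompiledHeader compile_header(const std::string& text, const MacroEnv&) {
        return {"<compiled form of " + std::to_string(text.size()) + " bytes>"};
    }

    // Memoise on (header text, definitions of the identifiers it uses).
    const CompiledHeader& compile_header_cached(const std::string& text,
                                                const MacroEnv& env) {
        // Keep only the definitions that can actually affect this header.
        MacroEnv relevant;
        for (const auto& id : scan_identifiers(text))
            if (auto it = env.find(id); it != env.end())
                relevant.insert(*it);

        static std::map<std::pair<std::string, MacroEnv>, CompiledHeader> cache;
        auto key = std::make_pair(text, std::move(relevant));
        auto [it, inserted] = cache.try_emplace(std::move(key));
        if (inserted)
            it->second = compile_header(text, env);
        return it->second;
    }

    int main() {
        MacroEnv env{{"MY_CONFIG", "1"}, {"UNRELATED", "42"}};
        const std::string header = "#if MY_CONFIG\nint feature();\n#endif\n";
        compile_header_cached(header, env);  // compiled and cached
        env["UNRELATED"] = "99";             // irrelevant definition changes...
        compile_header_cached(header, env);  // ...still a cache hit
    }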

The real issue is simply that C++ compilers are horrendously slow. They have been designed to produce fast executables rather than to compile quickly. Think of Rust, which has a high degree of structure in its compilation process and so a high degree of optimisability, yet it still suffers from slow compilation, largely because it relies on LLVM, the same backend Clang uses.

I think this is really because C++ builds are fundamentally unstructured. Rather than invoking the compiler once on the entire build directory and letting it handle each file, it is invoked separately for every file, in ways that can be non-trivial. Improving the build process almost always comes at the cost of compatibility with existing build systems.

Beyond that, C++ developers simply do not care about slow compilation times. If they did, they wouldn't be using C++. My personal theory is that C++ as a language has effectively self-selected a user base that is immensely tolerant of this kind of thing, by driving off anyone who isn't.

> I think this is really because C++ builds are fundamentally unstructured. Rather than invoking the compiler once on the entire build directory and letting it handle each file, it is invoked separately for every file, in ways that can be non-trivial.

True, this would also need to be fixed. Compilation would need to become a single process that can effectively manage concurrency and share data.
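
A rough sketch of what that single-process model could look like in C++ (C++17), with made-up names throughout (HeaderCache, compile_unit, ObjectFile are illustrative, not any real compiler's interface): one driver owns a shared cache of parsed headers and farms translation units out to concurrent tasks, so each header is parsed once for the whole build rather than once per compiler invocation.

    #include <future>
    #include <mutex>
    #include <shared_mutex>
    #include <string>
    #include <unordered_map>
    #include <vector>

    struct ObjectFile { std::string name; };

    // Shared across all translation units for the lifetime of the build.
    class HeaderCache {
        std::shared_mutex mu_;
        std::unordered_map<std::string, std::string> compiled_;  // path -> result
    public:
        std::string get_or_compile(const std::string& path) {
            {
                std::shared_lock lock(mu_);  // concurrent readers are fine
                if (auto it = compiled_.find(path); it != compiled_.end())
                    return it->second;
            }
            std::string result = "<parsed " + path + ">";  // stand-in for real work
            std::unique_lock lock(mu_);
            // If another task raced us here, emplace keeps the first result.
            return compiled_.emplace(path, std::move(result)).first->second;
        }
    };

    ObjectFile compile_unit(const std::string& source, HeaderCache& headers) {
        headers.get_or_compile("config.h");  // parsed once, shared by every unit
        return {source + ".o"};
    }

    int main() {
        HeaderCache headers;
        std::vector<std::future<ObjectFile>> jobs;
        for (const std::string src : {"a.cpp", "b.cpp", "c.cpp"})
            jobs.push_back(std::async(std::launch::async, compile_unit,
                                      src, std::ref(headers)));
        for (auto& j : jobs) j.get();  // a link step would consume these
    }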
