
Generally when benchmarking compile speeds, the unoptimized build is used, as that is what the edit-compile-debug loop runs. It has always been true that a good optimizer will dominate the build times.

Back in the Bronze Age (the 1990s) I endeavored to speed up compilation in the manner you describe ccache as doing. After the .h files were processed, the compiler would roll its state out to disk. (It could also do this with individual .h files.) Then, instead of processing all the .h files again, it would just memory-map in the precompiled state.
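In sketch form, the load path was something like this (illustrative only, not the actual compiler source; it assumes POSIX mmap and that the saved state is a relocatable flat blob):

    // Sketch: reuse the compiler state saved after the .h files were
    // processed. Hypothetical function; assumes POSIX mmap and a
    // relocatable flat state blob.
    #include <cstddef>
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    const void* load_precompiled_state(const char* path, std::size_t* len) {
        int fd = open(path, O_RDONLY);
        if (fd < 0)
            return nullptr;          // no cached state: run the full .h pass
        struct stat st;
        if (fstat(fd, &st) != 0) { close(fd); return nullptr; }
        *len = (std::size_t)st.st_size;
        void* p = mmap(nullptr, *len, PROT_READ, MAP_PRIVATE, fd, 0);
        close(fd);                   // the mapping outlives the descriptor
        return p == MAP_FAILED ? nullptr : p;
    }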

And yes, it resulted in a dramatic improvement in compile times, as you describe.

The downside was that one had to be extremely careful to compile the .h files the same way each time. A single difference, such as a macro defined differently on the command line, could change the path taken through the .h files and invalidate the precompiled version.

It took quite a lot of careful work to get right, and I expect ccache is also a complex piece of work.

What I learned from that is that it's easier to just fix the language so none of that is necessary. C/C++ can be so fixed; the proof is ImportC, a C compiler that can use imports instead of .h files, and that can compile multiple .c files in one invocation and merge them into a single .o file.
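Usage looks roughly like this (file names are made up, and the exact flag spelling is from memory, so treat it as a sketch):

    $ cat foo.c
    int foo(int x) { return x + 1; }
    $ cat bar.c
    int bar(int x) { return x * 2; }
    $ dmd -c foo.c bar.c -of=combined.o   # one invocation, one .o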




There is a reason why

> unoptimized build is used, as that is the edit-compile-debug loop

is no longer true.

Modern C++ has a lot of metaprogramming abstractions in it, and they are only zero-cost in optimized builds.
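A tiny illustration (my own example): both functions below typically compile to the same tight loop at -O2, but at -O0 the second keeps the lambda and iterator machinery as real function calls, which is exactly the cost you drag into an unoptimized debug build.

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    int sum_raw(const std::vector<int>& v) {
        int s = 0;
        for (std::size_t i = 0; i < v.size(); ++i)
            s += v[i];
        return s;
    }

    int sum_abstracted(const std::vector<int>& v) {
        int s = 0;
        // "Zero cost" only once the optimizer inlines the lambda and
        // the iterator operations; at -O0 these remain real calls.
        std::for_each(v.begin(), v.end(), [&s](int x) { s += x; });
        return s;
    }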

In my years of gamedev work I have not met a sizeable project that could work in unoptimized builds, even for debugging. Unoptimized builds only worked for unit tests or small tools.


I think at that point the real solution is to seriously consider all of the language constructs you use, and their compile times as well. It's not a given that using more of C++ is always better; real, sustainable improvements in compile times can be had by moving more and more towards C in many ways while keeping some of the safety C++ provides.

(I'm sure you've been there, though; gamedev is one of the areas where I would expect people to be more sensible about their C++ feature usage.)


If you are implying that we can go back to force-inlining everything and using only small wrappers around memcpy, then I will have to say that ship sailed years ago. I do not know anyone who wants to go back for more than the brief moments while a changed header causes a cascade of rebuilds.
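(For anyone who wasn't there, the style being alluded to is roughly this kind of thing; the macro and names are illustrative:)

    #include <cstring>

    // Old-school "no abstractions" style: tell the compiler to inline,
    // don't rely on the optimizer. __forceinline is the MSVC spelling;
    // GCC/Clang use the always_inline attribute.
    #if defined(_MSC_VER)
    #  define FORCE_INLINE __forceinline
    #else
    #  define FORCE_INLINE inline __attribute__((always_inline))
    #endif

    struct Vec3 { float x, y, z; };

    FORCE_INLINE void copy_vec3(Vec3* dst, const Vec3* src) {
        std::memcpy(dst, src, sizeof(Vec3));
    }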

Now, the elephant in the room of build times that no one wants to talk about is the 'optimized' build with PGO+LTO. I think none of the projects I worked on that relied on it ever had a local pipeline for it xD. But if you ask people whether they want to ship a build without it, the answer is a clear 'no'.
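For reference, the rough shape of such a pipeline with clang (the flags are the standard ones; the training-run step and its workload flag are hypothetical, and that step is what makes this hard to reproduce locally):

    # 1. Instrumented LTO build
    clang++ -O2 -flto -fprofile-generate -o game_instr main.cpp
    # 2. Training run emits .profraw files (workload flag is made up)
    ./game_instr --representative-workload
    llvm-profdata merge -output=game.profdata default_*.profraw
    # 3. Final build, optimized with the collected profile
    clang++ -O2 -flto -fprofile-use=game.profdata -o game main.cpp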

I will totally understand if the authors of the linked article also do not like to talk about it. What I am trying to do here is clear up confusion about its importance. Pretending that IWYU is more than polishing the last 5% of build times helps almost no one. YMMV ofc.


There are plenty of constructs in C++ that are safer than C and still don't impact compile times much, and some that, while safer and better in some regards, are murder for compile times. I'm saying there is a tradeoff to be made, and faster iteration speed is oftentimes more valuable for end-result quality than (often merely perceived) safety.



