
Long build times.

Started by
20 comments, last by SuperVGA 3 years, 9 months ago

I have recently noticed some posts discussing long build times, like hours. How does this work? Is this common?!

Josheir


C++ has long build times because its compilation model is ancient, hard to get right, and in general not very optimal for large codebases. Even so, C++ codebases can get very large. Fortunately there is a solution in the newest C++ standard, but it's going to take a while before it's fully available, and it will take a considerable amount of work to apply to an existing codebase.

Don't make changes to header files = small build time.

🙂🙂🙂🙂🙂 ← The tone posse, ready for action.

Juliean said:
Fortunately there is solution in the newest c++-standard

If you point to modules here, then we will definitely have to wait for another few standards to see if that really helps or just makes everything more complicated. From the syntax, it doesn't look well/long enough thought out (like quite a lot in the last few standard releases), so in my opinion, one should not rely on it too much.

Josheir said:
How does this work? This is common

It depends on how you set up your code base. A monolithic code base, which seems to be quite common in C++ and especially in game projects (looking at you, Unreal <.<), has a lot of internal dependencies: headers including headers that include headers, and so on (Boost made this a feature). Don't get me wrong, this is okay up to a certain level of hierarchy, but as @fleabay already said, changing a line in one of those hierarchies' upper levels will push the change all the way down to the bottom-most level as well.

This is how C++ compilation is designed. The process starts with your source files (.c/.cpp) and creates a separate compilation unit for each file in your project. Conversely, if you don't have any source files, nothing gets compiled at all (which also hides syntax errors in header-only libraries). A compilation unit reads the file from top to bottom and adds everything found in an #include directive to the internal text buffer, as though it were written directly in the source file. This means your headers are imported as text, not tokens, and so aren't cached for reuse; every compilation unit does this on its own, without communicating with other units.

It wouldn't even be possible to communicate or cache tokens, because of the way the preprocessor works. It is a pure text processor that has no knowledge of a syntax tree or even code tokens; everything it does is simple text replacement. That's why it is entirely up to the programmer to generate meaningful source code from macros, but on the other hand it offers the incredible power of generating source code from macros (which people seem to fear these days, don't ask me why). Anyway, the preprocessor output is fed to the compiler, which then parses the text into code units and does all the other C++ magic.

So in summary, a project has huge build times if it pulls a lot of header files into each compilation unit and changes are made to headers near the top of those chains, while the preprocessor is unable to cache code tokens because it is just a text-replacement unit rather than a parser.

__________________

But there are also legacy solutions that don't need the newest standard. They depend on how you structure your project, and so involve a lot of management, self-discipline, and planning of how you want to construct your project even before writing the first line of code.

What we do for our game engine project is to modularize everything! This means we have a single core module, the biggest of our modules, which contains just basic stuff like macros and base types, but also code every other module will use later, like our self-written container classes. Every other module written on top of that core fulfills a single atomic purpose, for example disk I/O, threading, math, or networking.

Every module has its own project and is built as a statically linked library, which comes at nearly no cost in C++ in terms of compile time and runtime. However, linking may take a while longer than in a monolithic project. It still happens that changing code in a module near the root of a hierarchy causes other modules to recompile as well, which also introduces a certain amount of build time, but because our modules are so small, it is less likely that a change will affect many other modules.

We have our own build-management tool, which analyzes the code once (and again for files that changed) and adds module references on build or when generating a Visual Studio solution for the project, so we don't have to bother with any of that ourselves, and it is really fast to set up new projects/modules as well.

fleabay said:

Don't make changes to header files = small build time.

+ And only include the stuff you need.
+ Adding further onto that: include just the stuff you need in your header files to satisfy the interface. If only your implementation uses <long_utility_file.h>, then let the include stay in the .cpp file and not in the header.

Shaarigan said:
If you point to modules here, then we will definitely have to wait for another few standards to see if that really helps or just makes everything more complicated. From the syntax, it doesn't look well/long enough thought out (like quite a lot in the last few standard releases), so in my opinion, one should not rely on it too much.

I disagree. Modules in their current form have a few benefits that make them worth using, in my opinion:

  • You can control which things in your “header” are exported (i.e. usable/visible where the header is consumed)
  • You can include larger “groups” of headers than before. What I mean is that currently, if you really want short build times, you pretty much have to selectively include headers as you need them. Say for a GUI system, you are better off including the different types of widgets only where you need them. In the current version of modules, you can group all widgets into gui.widgets and “include” that, without the negative effect on build times that you would expect with the pre-modules build chain

I don't expect compile times to be perfect straight away (most compiler vendors have already said they are first aiming at getting modules working and will worry about optimization later), but even in the current standard version it's definitely the way to go for the future, if you ask me.
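To sketch both points, here is what a hypothetical grouping module could look like (module and type names are made up; file extension and build steps vary by compiler, so this is an illustration, not drop-in code):

```cpp
// gui.widgets.ixx -- hypothetical C++20 module interface unit.
// Only names marked 'export' are visible to importers.
module;                      // global module fragment: textual includes go here
#include <string>

export module gui.widgets;   // this file owns the module name

namespace detail {
    // Not exported: invisible to code that does 'import gui.widgets;'.
    inline int next_id() { static int id = 0; return ++id; }
}

export struct Button {
    int id = detail::next_id();
    std::string label;
};

export struct Slider {
    int id = detail::next_id();
};
```

A consumer writes `import gui.widgets;` and sees only Button and Slider; the interface is compiled once into a binary form instead of being re-parsed as text by every translation unit.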

Let's say that building one executable, with one toolchain, takes one minute, which seems about right as a very rough average. Now multiply this by these numbers:

  • x8 for different toolchains/platforms: Windows (32-bit and 64-bit), Linux (32-bit and 64-bit), MacOS (always 64-bit), Linux on Raspberry Pi (32-bit and 64-bit), plus Emscripten. That's assuming just one toolchain per platform; I could easily double this by using both gcc and clang.
  • x3 for basic configurations: release, debug, and release with debug information.
  • x2 for both Steam builds and DRM-free builds.
  • x60 for different executables. This includes different games (which use shared library code and therefore need to be recompiled when that shared library code changes), different tools (needed to process the data files the games use, also using the shared library code), and a few standalone unit tests for the shared library code.

Multiply that together, and a full rebuild takes about 48 hours, not counting the time taken by recompiling third-party libraries and actually running the tools to process the data files.

Some tools like Unreal also have a cook phase, which many people bundle into build times. Every asset is evaluated, and if the engine cannot prove that the old output is still valid, it will re-cook it: process textures at all detail levels for the target architecture, optimize meshes at multiple detail levels, merge assets to reduce render calls, compute lighting, and much more.

In big games a full cook can take many days, even when distributed across multiple computers.

Others have covered build times for compilation. Games that rely on large engines (again, Unreal for example) must do all that work for the engine as well as the game's code. A full rebuild of the code from scratch can take multiple hours for the reasons given above; the code is complex, with many details to evaluate.

[quote]C++ has long build times because its compile-model is ancient, hard to get right and in generel not very optimal for large codebases.[/quote]

This is not true.

C++ has long build times because:

  • Some engineers have never learned good build tool practice.
  • The C++ template type derivation logic is factorial in complexity.

You can fix this by writing accurate build files, not putting unnecessary dependencies in header files, and being very careful about whether you really need to use C++ type derivation magic, when perhaps a slightly uglier macro will do.

Note that any type derivation language will have the same problem – Scala, Haskell, Swift, Rust, and C++ templates all share the same underlying algebra.

Unreal build times are long (but not “hours”) because:

  • Unreal includes a MASSIVE amount of separate subsystems and libraries.

You can fix this by having enough RAM/Cores in your build machine. Or machines.

Specific game projects in specific tools have long build times because:

  1. Certain pre-computation steps, like lighting bakes, need to consider a very large number of object interactions: every polygon in every object could conceivably interact with every other polygon (although, in practice, you will put limitations on how far this interaction can go).

You can work around this by turning off features that require lots of pre-computation, or by adding many machines that let the build run in parallel (assuming your build tools are smart enough to parallelize the build).

enum Bool { True, False, FileNotFound };

hplus0603 said:

Unreal build times are long (but not “hours”) because:

  • Unreal includes a MASSIVE amount of separate subsystems and libraries.

You can fix this by having enough RAM/Cores in your build machine. Or machines.

Please elaborate on how you'd go about reducing linking time by increasing RAM or cores (or machines)?

