Design Patterns and performance tradeoff for methods with too many arguments

Started by Pooya65
5 comments, last by frob 3 years, 3 months ago

Sometimes we need to create methods that take too many arguments. Take a method called “createTexture” as an example: it would require several arguments such as width, height, magnification filter, etc. The growing number of arguments results in a code smell. Design patterns like Builder could be seen as a solution; however, from a performance perspective, using such design patterns might not seem like a good idea, since many methods will be called. The question is: what should we do to keep the code clean and efficient? Which design patterns or techniques do you suggest for methods with too many arguments?


Pooya65 said:
from a performance perspective, using such design patterns might not seem like a good idea, since many methods will be called

Generally "method calls" is not a great way to quantify performance. As long as your Builder object is just plain old data, builder methods ought to all inline just fine, and have no performance impact (at least in optimised/release builds) versus just passing parameters into the method in the first place.

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

Pooya65 said:
from a performance perspective, using such design patterns might not seem like a good idea, since many methods will be called.

As swiftcoder says, the code that you write and the code that the compiler actually produces to be executed are two very different things.

Compilers have grown very good at eliminating code, inlining function calls, and generally shuffling things around for better performance. In other words, what you write is likely not exactly what the computer actually does.

There is, however, a secondary performance issue which I consider much more important, namely ourselves. We can worry to no end about computer performance, but the computer measures its performance in milliseconds or less. We human beings, however, measure performance in seconds to hours. So I prefer to write code that makes my life simpler, so I can write code faster, rather than trying to squeeze a few milliseconds of performance out of a CPU by spending hours to days programming it.

Yes, performance is important, but only after you have observed that you have a problem. Otherwise, don't waste CPU time needlessly (i.e. no bubble sort for sorting a million elements), and optimize the program for yourself by writing clean and understandable code.

And if you don't have to pass every argument on every method call, just use default parameters, or an overload of the fully parameterized function, for the things that barely change.
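For example, a hedged sketch using the same made-up createTexture name as above: default arguments cover the rarely changed parameters, and a thin overload covers a common special case.

#include <cstdint>

enum class Filter { Nearest, Linear };
struct Texture { /* ... */ };

// Full version: every parameter is available; the rarely changed ones default.
Texture createTexture(uint32_t width,
                      uint32_t height,
                      Filter magFilter = Filter::Linear,
                      bool mipmaps = false)
{
    // ... allocate and fill the texture (omitted in this sketch) ...
    (void)width; (void)height; (void)magFilter; (void)mipmaps;
    return Texture{};
}

// Convenience overload that forwards to the fully parameterized function.
Texture createSquareTexture(uint32_t size)
{
    return createTexture(size, size);
}

// Usage:
// auto a = createTexture(1024, 768);                        // defaults for the rest
// auto b = createTexture(1024, 768, Filter::Nearest, true); // full control
// auto c = createSquareTexture(256);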

Unfortunately, with regard to inlining, you have to be a bit careful. By default, if a method's declaration and definition are split, the compiler will not be able to inline that method outside of its own translation unit (i.e. in any other header or source file where it is used). There are optimization strategies to alleviate that, like whole-program link-time optimization in VS, but those are really expensive, and I have worked on projects that were so large that this optimization could not be performed without the linker crashing.

So while I do agree with the notion that you shouldn't worry too much about method-call performance in general, this is something to keep in mind; if you want a method that absolutely should be inlined, you pretty much have to define it in the header.
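A small illustration of that point (the Widget type is hypothetical, not from the thread):

// widget.h
struct Widget {
    int value = 0;

    int getOutOfLine() const;                  // body lives in widget.cpp: callers in other
                                               // translation units only see this declaration,
                                               // so it cannot be inlined there without
                                               // whole-program / link-time optimization.

    int getInHeader() const { return value; }  // body is visible in the header, so the
                                               // compiler can inline it at every call site.
};

// widget.cpp
// int Widget::getOutOfLine() const { return value; }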

Pooya65 said:
The question is: what should we do to keep the code clean and efficient?

The efficiency difference to the CPU is nominal. You're talking about cache memory and CPU registers, which are basically free on modern CPUs. For a great many operations the actual CPU time is exactly the same whether you touch something a single time or twenty times; once it is already inside the core, the amortized cost is essentially zero. By far the biggest performance cost these days is keeping the CPU fed with data and instructions, not the cost of the processing itself. This is not universally true, so learn to use your profiler, but once something makes it into the instruction or data cache it very often vanishes from performance measurements.

This is part of the reason why batch processing can be so much faster. Well-designed batches lay out the instructions and data in ways that are friendly to the CPU's prefetch system, hence the frequent focus on data-oriented programming. The prefetchers are quite good at recognizing data-loading patterns, even patterns across function calls, so do your best to help them along. In contrast, calling tiny functions thousands of times with less of a pattern, or constantly bouncing in and out of systems, makes it harder to keep the CPU fed with data and instructions.
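As a rough sketch of what that looks like in code (illustrative names only): one tight loop over a contiguous array gives the prefetcher a simple linear access pattern, instead of thousands of scattered per-object calls.

#include <vector>

struct Particle { float x, y, vx, vy; };

// Data-oriented batch: all particles live in one contiguous array and are
// processed in a single pass, which the hardware prefetcher handles well.
void integrate(std::vector<Particle>& particles, float dt)
{
    for (auto& p : particles) {
        p.x += p.vx * dt;
        p.y += p.vy * dt;
    }
}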

The efficiency of programmers is important, too. You need “enough” parameters to get the job done. Whether that means passing a series of parameters, passing a single structure full of data, or keeping it all as member variables is going to depend tremendously on the task at hand.
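The “single structure full of data” option might look like this; it is only a sketch, and TextureDesc plus the designated-initializer usage are my own illustration rather than anything from the thread.

#include <cstdint>

enum class Filter { Nearest, Linear };
struct Texture { /* ... */ };

// Descriptor struct: one parameter carries everything, with sensible defaults.
struct TextureDesc {
    uint32_t width  = 1;
    uint32_t height = 1;
    Filter   magFilter = Filter::Linear;
    bool     mipmaps   = false;
};

Texture createTexture(const TextureDesc& desc);  // one argument instead of many

// Usage (C++20 designated initializers name each field at the call site):
// auto tex = createTexture({ .width = 1024, .height = 1024, .mipmaps = true });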

This topic is closed to new replies.
