
arithmetic overflow

Started by
12 comments, last by JoeJ 3 years, 5 months ago

Hi

doing this:

unsigned char* textureData = new unsigned char[info->tex_height * info->tex_width];

I get this:

Warning C26451 Arithmetic overflow: Using operator '-' on a 4 byte value and then casting the result to a 8 byte value. Cast the value to the wider type before calling operator '-' to avoid overflow (io.2).

What to change? Many thanks


There is no ‘-’ in the code line you have posted, so I guess the warning refers to another line?

In general, such warnings can often be ignored. Example:

std::vector<int> stuff(100);
int count = stuff.size(); // gives a warning

The vector uses size_t for the type, which is 8 bytes in a 64-bit program. So if you use a 4-byte int as here, the count variable might not have enough bits to represent huge sizes. But if you are certain you will never use such huge amounts of data (almost always the case), the warning can be ignored or disabled with a #pragma.

I would advise against ignoring any warnings, for two reasons:

  1. If your code issues a bunch of warnings when it compiles, there is a good chance you will miss an important warning in the noise.
  2. Changing your code to eliminate the warning often results in slightly better code. For instance in JoeJ's example, you can use a type that is more explicit about the fact that you are dealing with the size of some structure in memory:
std::vector<int> stuff(100);
size_t count = stuff.size(); // probably no warning

alvaro said:
size_t count = stuff.size(); // probably no warning

I would agree with your points, but in practice it gives me too many issues, so I end up using the pragma to disable the warning very often :/

Let me try to give an example…

std::vector<Node> nodes;
for (int i = 0; i < nodes.size(); i++)
{
    int fourthChildI = nodes[i].fistChild + 4;
    if (nodes[fourthChildI].parent == -1) Log("bug found!");
}

Reasons i want to use int over size_t:

  • I am sure 32 bits are enough for my data, so why use 64? On some (or any) platform this could affect performance.
  • size_t is unsigned, but signed types need less attention, while unsigned is error prone. I'm fine with sacrificing another bit here to keep it simple.

The only solution for me would be to cast, like so:

for (int i=0; i<(int)nodes.size(); i++)

Which is harder to read and thus worse than using the pragma, IMO. Still, I see this looks bad and fishy. It's just that using int is quick and short, and I don't see a need to use custom types for everything.

Sorry, it should be ‘*’ and not ‘-’.

tex_height and tex_width are int.

I could still use some help with my code. Thanks

const size_t NO_PARENT = SIZE_MAX; // if real indices reach SIZE_MAX you have bigger problems than overflow
// ...
std::vector<Node> nodes;
// ...
for (size_t i = 0; i < nodes.size(); i++)
{
    size_t fourthChildI = nodes[i].fistChild + 4; // maybe firstChild?
    if (nodes[fourthChildI].parent == NO_PARENT) Log("bug found!"); // if Node.parent values are correctly set
}

What would you need a signed index for?

Regarding the original question: if info->tex_height and info->tex_width are 32 bits, signed or unsigned, the compiler figures out that their product doesn't fit in 32 bits. You can avoid overflow by doing a 64-bit multiplication instead, by first casting at least one of the values to a 64-bit type. It isn't actually related to signedness or to using size_t.

Omae Wa Mou Shindeiru

LorenzoGatti said:
What would you need a signed index for?

I fail to come up with an example, but whenever I try to use unsigned types for indices, I soon regret it because it causes extra work, and I switch back to signed. (I'm forced to be more strict here on GPU because GLSL gives an error on mixing signed and unsigned. That's not really helpful, just annoying.)

LorenzoGatti said:
if (nodes[fourthChildI].parent == NO_PARENT)

Your solution is good practice, but the parent index should be only 32 bits to save memory, so some casting is often unavoidable.
It's not that doing proper casting would be evil, but in my case it really would clutter my code, because I do so much math on indices in general.
Probably the primary reason for my issues is using a pre-C++11 codebase, and my progress towards modern C++ is somewhat clumsy.

MikeCyber said:
Sorry it should be ‘*’ and not ‘-’ tex_height and tex_width are int.

So you are saying a multiplication using the same type gives a warning? :O
I tried it myself, disabling all pragmas:

	constexpr int a = 0x10000000;
	constexpr int b = 100;
	int c = a * b;

and this is what I get:

warning C4307: '*': signed integral constant overflow

But I do not get a warning if I remove constexpr:

	int a = 0x10000000;
	int b = 100;
	int c = a * b;

Or if I use numbers that don't cause overflow, there is also no warning:

	constexpr int a = 200;
	constexpr int b = 100;
	int c = a * b;

Maybe you have set the warning level higher than the default of 3?

But even then, I do not understand why you get a warning at all.

No, I have /W3. Any help please?

JoeJ said:
I am sure 32 bits are enough for my data, so why use 64? On some (or any) platform this could affect performance.

You are quite likely paying the cost of conversions that you didn't intend. It is faster to keep them all as the same type and do the math than it is to do the conversions.

Sometimes those conversions you pay for happen inside the CPU core itself. Modern processors have moved to 64-bit processing internally so many of the 32-bit, 16-bit, and 8-bit operations are generally slower. For many operations they do the work as though it were a 64-bit value sometimes having to extend the value out, do the math, and then chop it back down. Benchmark tools like PassMark PerformanceTest have found notable differences on many CPUs which they have occasionally discussed in their forums. For example, 64-bit integer multiplies may be 6x faster than the 32-bit integer operations, or 64-bit division might be 4x faster than the same 32-bit integer division. Exact numbers depend on the chips and the operations involved.

None of this is new. The performance difference was noted about 15 years ago when Intel and AMD moved to 64-bit CPU cores. They continue to focus first on 64-bit (and with modern extensions also 128-bit, 256-bit, and even 512-bit) optimizations and leave the old, smaller cases behind as historical footnotes. The engineers might have left in faster hardware designs specific to the 32-bit or 16-bit operations, but quite often they don't bother and simply do the work at 64-bit (or higher) precision.

If your data is 32-bit, use a 32-bit type like u32, i32, int32_t, or whatever you want in your system. If it is 64-bit, do likewise, using the actual size of the data. If you're working with memory sizes, then size_t is the proper data type, not int or int64_t or int32_t or other variations. Don't intentionally do conversions you don't need.

frob said:
You are quite likely paying the cost of conversions that you didn't intend. It is faster to keep them all as the same type and do the math than it is to do the conversions. Sometimes those conversions you pay for happen inside the CPU core itself. Modern processors have moved to 64-bit processing internally so many of the 32-bit, 16-bit, and 8-bit operations are generally slower.

Thanks, that's the argument I needed to hear.
It seems sticking with int in general is just lazy, and I need to type longer type names in the future… : )

This topic is closed to new replies.
