How to determine graphics card memory


Hello,

I have coded an OpenGL program with shaders that implements Order Independent Transparency using a linked list approach. That works fine.

If there are too many fragments, the memory I allocated is not enough. Naturally, I then try to allocate a larger chunk of memory from the video card. I know when that fails, so that's not a problem either; I have proper workarounds for that.

The problem is that I am probably requesting so much memory from the video card that I'm not leaving much margin for anything else - and I get a TDR (Timeout Detection and Recovery). This only happens when I'm really stressing the system (e.g., zooming into a region such that I end up with lots of fragments to be processed).

I thought perhaps the best thing to do is simply to find out how much memory is available on the graphics card and limit my allocations so that I always leave a safe margin and this never happens.

The program runs on both Linux and Windows. Are there ways - preferably, standard C/C++ calls - that would query the video card memory so I can always leave sufficient memory around? I understand I won't be able to allocate all the memory I need, and that's OK. I can deal with that. What I can't deal with is a TDR!

Thanks.


For Nvidia cards you can use this:

http://developer.download.nvidia.com/opengl/specs/GL_NVX_gpu_memory_info.txt

GL_NVX_gpu_memory_info

For ATI you have this:

https://www.khronos.org/registry/OpenGL/extensions/ATI/ATI_meminfo.txt

GL_ATI_meminfo
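If you do go that route, both extensions are queried through plain glGetIntegerv with vendor-specific tokens, and the values come back in KB. A minimal sketch, assuming a current GL context, GLEW for the extension check, and the token values taken from the two specs above:

    #include <GL/glew.h>
    #include <cstdio>

    // Token values copied from the extension specs, in case the headers lack them.
    #define GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX 0x9049
    #define TEXTURE_FREE_MEMORY_ATI                      0x87FC

    // Prints roughly how much video memory is currently free, if either
    // vendor extension is exposed. Assumes glewInit() has already been called.
    void printFreeVideoMemory()
    {
        GLint kb[4] = {0, 0, 0, 0};
        if (glewIsSupported("GL_NVX_gpu_memory_info")) {
            glGetIntegerv(GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX, kb);
            printf("NVIDIA: ~%d MB of VRAM currently available\n", kb[0] / 1024);
        } else if (glewIsSupported("GL_ATI_meminfo")) {
            glGetIntegerv(TEXTURE_FREE_MEMORY_ATI, kb);   // kb[0] = total free KB in the pool
            printf("AMD/ATI: ~%d MB free in the texture pool\n", kb[0] / 1024);
        } else {
            printf("No vendor memory-info extension available on this driver.\n");
        }
    }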

What you want to consider is that you really shouldn't worry about VRAM. Instead, you can test the time your batches take against the TDR limit and split them into smaller batches as needed (start small, test, and see how far you can scale before you risk hitting a timeout). Otherwise you could consider increasing the TDR limit or disabling it altogether, but I would advise fixing the problem. Keep in mind that when you make registry edits like this, any driver update can reset your TDR registry edits to default (in my experience it does so 100% of the time), and Windows Updates have done it for me as well.
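Something like this is what I mean by testing batches against the TDR - a rough sketch using GL 3.3 timer queries, where drawBatch and batchSize are placeholders for however you submit your work:

    // Measure GPU time for one batch and shrink the batch if it gets anywhere
    // near the TDR window (the 500 ms threshold here is just a conservative guess).
    GLuint query;
    glGenQueries(1, &query);
    glBeginQuery(GL_TIME_ELAPSED, query);
    drawBatch(batchSize);                       // placeholder: submit one batch of work
    glEndQuery(GL_TIME_ELAPSED);

    GLuint64 ns = 0;
    glGetQueryObjectui64v(query, GL_QUERY_RESULT, &ns);   // blocks until the result is ready
    glDeleteQueries(1, &query);

    if (ns / 1.0e6 > 500.0 && batchSize > 1)    // took too long? halve the batch
        batchSize /= 2;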

Programmer and 3D Artist

Hi Rutin,

Thanks for the suggestions. I'll take a look at these calls. But I agree with you: it's best to get to the bottom of the problem.

I was told to stay away from vendor-dependent API calls, so vendor-specific memory checks for certain graphics cards won't work for me.

However, I did run into an issue I'm unable to understand. Using Process Explorer, I noticed that when I allocate the linked list for my OIT with glBufferData, the GPU memory usage does NOT go up. And no GL error is issued. It appears that the memory gets allocated on the first rendering call I make. That's a reasonable implementation.

My GPU has 4GB of memory. I stretched things out and requested 12GB - which I know I don't have. Yet at no point did I get a GL error - not at allocation and not during rendering. I was expecting an OUT_OF_MEMORY error, but none came. The image was simply not displayed.
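Roughly what that test looks like (a simplified sketch - the SSBO target and usage hint here are just illustrative):

    // Request far more than the 4GB of VRAM actually present.
    const GLsizeiptr bytes = 12LL * 1024 * 1024 * 1024;   // 12 GB
    GLuint ssbo;
    glGenBuffers(1, &ssbo);
    glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
    glBufferData(GL_SHADER_STORAGE_BUFFER, bytes, NULL, GL_DYNAMIC_DRAW);
    // Still reports GL_NO_ERROR here, presumably because the driver defers
    // the real allocation until the buffer is first used.
    printf("glGetError after glBufferData: 0x%x\n", glGetError());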

I had been counting on at least getting a GL error.

Can anybody explain to me why I can successfully request more memory from the GPU than is available without an error?

Thanks.

I suppose it's page-swapping and such. If you, for instance, allocate more memory for textures than what's available, you'll get drastically reduced framerates, as data is potentially swapped in and out each frame.

.:vinterberg:.

Are you actually writing to all 12 GB of memory?

OpenGL isn't really meant for this sort of thing - it's designed to abstract everything about GPU memory away from the user, and as you have discovered, it'll happily swap gigabytes of data back-and-forth between main memory and GPU memory in order to pretend that there are no memory limits.

Moving to a modern low-level graphics API like DX12 or Vulkan would give you explicit control over memory allocation (at the cost of also having to take explicit control over everything else in your graphics pipeline).
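For comparison, this is the sort of explicit information Vulkan hands you up front (a small sketch; physicalDevice is assumed to have been enumerated and selected already):

    #include <vulkan/vulkan.h>
    #include <cstdio>

    // Report the size of each device-local (i.e. on-card) memory heap.
    void printDeviceLocalHeaps(VkPhysicalDevice physicalDevice)
    {
        VkPhysicalDeviceMemoryProperties mem{};
        vkGetPhysicalDeviceMemoryProperties(physicalDevice, &mem);
        for (uint32_t i = 0; i < mem.memoryHeapCount; ++i)
            if (mem.memoryHeaps[i].flags & VK_MEMORY_HEAP_DEVICE_LOCAL_BIT)
                printf("Device-local heap %u: %llu MB\n", i,
                       (unsigned long long)(mem.memoryHeaps[i].size >> 20));
    }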

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

Hi swiftcoder,

No; I'm not actually writing to all 12GB of memory. The point is that some people's experience with the program is that, at times, nothing gets drawn. It's possible they are using a computer with much more limited resources than I normally use. So I tried to mimic the behavior by requesting an extraordinarily large amount of memory just to see how memory failure reveals itself.

To my surprise, it doesn't. I never get the GL out-of-memory error I expected - one I could recover from or, at the very least, use to report the failure and quit gracefully. What I observe is that nothing is drawn, and I have no way of controlling this behavior because glGetError gives no indication that anything went wrong.

Perhaps for another application we'll move to Vulkan. But for the moment there's just too much invested in OpenGL, and I need to find a way to detect that nothing is being drawn.

amtri said:
No; I'm not actually writing to all 12GB of memory

I'm not convinced that OpenGL will actually try to allocate most of that memory if you don't write to it. If you try to fill all 12 GB with random data, do you get an error?

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

That's a very good question.

So this is what I did: I stepped through the debugger so I could monitor both heap and GPU memory - the first with Task Manager, the second with Process Explorer.

Immediately after calling glBufferData to request 12GB, I got no GL error, and the call itself returned very quickly.

Next I called glBufferSubData to set just the first and last bytes of my 12GB buffer to 1. That took quite some time. In the end, heap memory usage was around 12GB, Dedicated GPU Memory was 36.4MB, Committed GPU Memory was 57.5MB, and System GPU Memory was 3.2GB. But no GL error!

And these calls took quite some time - around 10 seconds - and in the end nothing was drawn.

If I allocate a “reasonable” amount of memory then that memory goes all into dedicated GPU Memory and the calls are quick and the image gets drawn.

So it appears I can't rely on glGetError to determine whether the requested memory is reasonable. My thinking now is to use an ad hoc approach where I simply time how long it takes to set these 2 bytes - the first and last - and, if the time is unreasonable, I will treat it like a GL error.
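Something along these lines is what I have in mind (a sketch; the 1-second cutoff is an arbitrary guess at "unreasonable"):

    #include <chrono>

    // Touch the first and last byte of the buffer, then glFinish() so the
    // driver actually does the work before we stop the clock.
    bool bufferLooksUsable(GLuint buffer, GLsizeiptr size)
    {
        const unsigned char one = 1;
        glBindBuffer(GL_SHADER_STORAGE_BUFFER, buffer);

        auto t0 = std::chrono::steady_clock::now();
        glBufferSubData(GL_SHADER_STORAGE_BUFFER, 0,        1, &one);
        glBufferSubData(GL_SHADER_STORAGE_BUFFER, size - 1, 1, &one);
        glFinish();
        auto t1 = std::chrono::steady_clock::now();

        return std::chrono::duration<double>(t1 - t0).count() < 1.0;   // "unreasonable" if slower
    }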

But I would have preferred to get a GL error rather than doing this.

amtri said:
In the end, the heap memory usage was around 12GB. The dedicated GPU Memory was 36.4MB, and the Committed GPU Memory was 57.5MB.

It looks like this still isn't actually uploading your buffer to the GPU - it's just zeroing out a 12 GB buffer in main memory and waiting to see what you do with it. Or, if it is uploading it, it's inferring that there are ~12 GB of zeroed pages in the middle and only uploading the start/end pages.

I think you'd have to fill a decently-large chunk of that buffer with random data, and then read from it in a shader to force the driver to upload some/all of the buffer.
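Something like this would be a starting point (a sketch; the chunk size and SSBO target are arbitrary, and you'd still need a shader pass that reads the data afterwards):

    #include <vector>
    #include <cstdlib>

    // Fill a slice of the buffer with non-zero data so the driver can't treat
    // it as untouched/zeroed pages.
    void touchBufferChunk(GLuint buffer, GLintptr offset, GLsizeiptr chunkBytes)
    {
        std::vector<unsigned char> junk(chunkBytes);
        for (auto &b : junk)
            b = (unsigned char)(rand() % 255 + 1);   // random, never zero
        glBindBuffer(GL_SHADER_STORAGE_BUFFER, buffer);
        glBufferSubData(GL_SHADER_STORAGE_BUFFER, offset, chunkBytes, junk.data());
        glFinish();   // force the upload to happen now rather than lazily
    }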

But you may also just be at the mercy of an implementation that doesn't return GL_OUT_OF_MEMORY 🤷‍♀️

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

