DirectX9 HLSL texture issue (help solve a 10+-year-old bug!)

2 comments, last by blairhartley99 8 months, 1 week ago

I have a DirectX9 application I wrote in the early noughties that simulates paint blending and mixing. Back then I had an NVidia GPU, and I specified a similar GPU wherever the application was installed. Here's the thing… it works perfectly on any NVidia GPU, but there is a problem on both Intel embedded and AMD GPUs. I didn't need to fix the issue at the time, but recently I've wanted to track down what is going on.

The DirectX9 application uses a multi-texture, multi-render-target pipeline based around HLSL shaders. I've tracked down what is happening on non-NVidia GPUs… basically, texture sampling in some shaders only ever samples the top-left texel. In other words, the shader isn't traversing the texture source as you'd expect (but does so fine on NVidia GPUs).

I have assumed that this is likely some setting that I'm not applying, but that defaults one way for NVidia, and the other way for non-NVidia. But I can't find what I'm actually doing wrong.

Here's an example HLSL pixel shader that is showing the problem:

PS_OUTPUT painting_rotation(VS_OUTPUT Input)
{
	PS_OUTPUT	output;

	output.v3_colour[0]	= tex2D(fs4_pt_sat_sampler,			Input.v2_current_texture_coordinates);
	output.v3_colour[1]	= tex2D(paper_background_sampler,	Input.v2_current_texture_coordinates);

	return output;
}

VS_OUTPUT is defined as follows:

struct VS_OUTPUT
{
    float4 v4_pos : POSITION;
    float2 v2_current_texture_coordinates : TEXCOORD0;
    float2 v2_back_texture_coordinates : TEXCOORD1;
};

Neither TEXCOORD0 nor TEXCOORD1 is being interpolated as the shader runs across the texture.
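One quick way to confirm the interpolators really are stuck at zero is to write the incoming texture coordinates straight to the render targets and look at the result. This is a diagnostic sketch of my own (reusing the `VS_OUTPUT`/`PS_OUTPUT` structs from the post), not code from the application:

```hlsl
// Diagnostic pixel shader: visualise the incoming interpolators.
// If the output is solid black everywhere, the TEXCOORDs are never
// being interpolated; if you see a red/green gradient across the
// quad, the interpolators are fine and the problem lies elsewhere
// (e.g. sampler state or the bound textures).
PS_OUTPUT painting_rotation_debug(VS_OUTPUT Input)
{
	PS_OUTPUT output;

	output.v3_colour[0] = float4(Input.v2_current_texture_coordinates, 0.0f, 1.0f);
	output.v3_colour[1] = float4(Input.v2_back_texture_coordinates,    0.0f, 1.0f);

	return output;
}
```

That would at least separate "the interpolators are zero" from "the sampling itself is broken" on the affected GPUs.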

PS_OUTPUT is defined as follows:

struct PS_OUTPUT
{
    float4 v3_colour[2]						: COLOR;
};

Setting up the call to this shader, the vertex definition looks like this:

struct SCREENVERTEX		
{
	D3DXVECTOR4			v4_pos;
	D3DXVECTOR2			v2_current_texture_coordinates;
	D3DXVECTOR2			v2_back_texture_coordinates;

	static const DWORD	FVF;
};

FVF is defined as follows:

const DWORD	SCREENVERTEX::FVF =	D3DFVF_XYZRHW |	D3DFVF_TEX2
								| D3DFVF_TEXCOORDSIZE2(0)
								| D3DFVF_TEXCOORDSIZE2(1);

There are no doubt countless other snippets I need to provide to give clues as to why this isn't working. It's worth repeating that on NVidia cards, this all works perfectly.

If anyone can help me get to the bottom of what is going on, I'd be very grateful.

Many thanks, Tim


Are you sure you are not violating device capabilities somehow?

And to use shaders, you normally use vertex declarations, not FVF. Try replacing your FVF with a vertex declaration.
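For what it's worth, a declaration matching the SCREENVERTEX layout might look something like the sketch below (names like `screenVertexElements`, `pDecl` and `device` are mine, not from the original code). One thing worth noting: `D3DFVF_XYZRHW` corresponds to the `POSITIONT` (pretransformed) usage, not `POSITION`, and mixing pretransformed vertices with a programmable vertex shader is undefined per the D3D9 spec — some drivers may be stricter about that than NVidia's:

```cpp
// Sketch: vertex declaration matching SCREENVERTEX
// (assumes d3d9.h and a valid IDirect3DDevice9* device).
D3DVERTEXELEMENT9 screenVertexElements[] =
{
    // offset 0: pretransformed position (D3DFVF_XYZRHW -> POSITIONT)
    { 0,  0, D3DDECLTYPE_FLOAT4, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITIONT, 0 },
    // offset 16: first set of 2D texture coordinates (TEXCOORD0)
    { 0, 16, D3DDECLTYPE_FLOAT2, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD,  0 },
    // offset 24: second set of 2D texture coordinates (TEXCOORD1)
    { 0, 24, D3DDECLTYPE_FLOAT2, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD,  1 },
    D3DDECL_END()
};

IDirect3DVertexDeclaration9* pDecl = NULL;
device->CreateVertexDeclaration(screenVertexElements, &pDecl);
device->SetVertexDeclaration(pDecl);
```

The usages in the declaration have to match the vertex shader's input semantics, so it would be worth checking what the vertex shader declares for its position input as well.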

Hi @aerodactyl55 , many thanks for your reply.

Yes - agreed re. device capabilities. It was one of the first things I checked… and all the features I think I'm using are marked as supported on both GPUs I'm testing on (an NVidia mobile GPU and an embedded Intel GPU). It could still be something in that category, but I've exhausted my ability to spot any differences.

I've used vertex declarations also… and the effect is identical.

Cheers, Tim

