OpenGL – Deferred Rendering and a Point Light That Moves with the Camera

I know there are several threads about this problem on the Internet, but they did not help me because my implementation is different.

First, I render the colors, normals, and depths in view space to textures. Then I bind those textures, draw a full-screen quad, and calculate the lighting. The directional light seems to work fine, but the point light moves with the camera.
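Roughly, the CPU side of the lighting pass looks like this. This is a simplified sketch rather than my exact code: colorTexture, normalTexture, depthTexture, and drawFullScreenQuad() are placeholder names, and I assume the shader wrapper can set sampler uniforms by texture unit.

// Bind the G-buffer textures from the first pass to units 0-2.
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, colorTexture);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, normalTexture);
glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_2D, depthTexture);

// Tell the lighting shader which unit each sampler uses,
// then draw a quad covering the whole screen.
shader->set("colorBuffer", 0);
shader->set("normalBuffer", 1);
shader->set("depthBuffer", 2);
drawFullScreenQuad();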

Here is the corresponding shader code:

Lighting pass vertex shader

in vec2 inVertex;
in vec2 inTexCoord;
out vec2 texCoord;

void main() {
    // Full-screen quad: positions are already in clip space.
    gl_Position = vec4(inVertex, 0.0, 1.0);
    texCoord = inTexCoord;
}

Lighting pass fragment shader

float depth = texture2D(depthBuffer, texCoord).r;
vec3 normal = texture2D(normalBuffer, texCoord).rgb;
vec3 color = texture2D(colorBuffer, texCoord).rgb;

vec3 position;
// Reconstruct the view-space position: linearize the depth sample,
// then scale the NDC x/y by the (negated) view-space z.
position.z = -nearPlane / (farPlane - (depth * (farPlane - nearPlane))) * farPlane;
position.x = ((gl_FragCoord.x / width) * 2.0) - 1.0;
position.y = (((gl_FragCoord.y / height) * 2.0) - 1.0) * (height / width);
position.x *= -position.z;
position.y *= -position.z;

normal = normalize(normal);
vec3 lightVector = lightPosition.xyz - position;
float dist = length(lightVector);
lightVector = normalize(lightVector);

float nDotL = max(dot(normal, lightVector), 0.0);
vec3 halfVector = normalize(lightVector - position);
float nDotHV = max(dot(normal, halfVector), 0.0);

vec3 lightColor = lightAmbient;
vec3 diffuse = lightDiffuse * nDotL;
vec3 specular = lightSpecular * pow(nDotHV, 1.0) * nDotL;
lightColor += diffuse + specular;
float attenuation = clamp(1.0 / (lightAttenuation.x + lightAttenuation.y * dist + lightAttenuation.z * dist * dist), 0.0, 1.0);

gl_FragColor = vec4(vec3(color * lightColor * attenuation), 1.0);

I send the light position as a uniform to the shader:

shader->set("lightPosition", (viewMatrix * modelMatrix).inverse().transpose() * vec4(0, 10, 0, 1.0 ));

viewMatrix is the camera matrix, and modelMatrix is just the identity here.

Why does the point light translate with the camera instead of with the model?

Any suggestions are welcome!

In addition to Nobody's comment, all the vectors you compute must be normalized, and you must make sure they are all in the same space. If you use the view-space position as the view vector, the normal vector must also be in view space (it has to be transformed by the inverse transpose of the model-view matrix before being written to the G-buffer in the first pass), and the light vector must be in view space, too. Therefore, you have to transform the light's position by the view matrix (or the model-view matrix, if the light position is not in world space), not by its inverse transpose:

shader->set("lightPosition", viewMatrix * modelMatrix * vec4(0, 10, 0, 1.0));

Edit: For directional lights, where you specify the light direction as the direction to the light (like vec4(0, 1, 0, 0) for a light shining in the -y direction), the inverse transpose is actually a good idea.
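In that case, the upload mirrors your original call; a sketch, with lightDirection as an example uniform name (the w component of 0 makes it a direction, so the translation part of the matrix has no effect):

shader->set("lightDirection", (viewMatrix * modelMatrix).inverse().transpose() * vec4(0, 1, 0, 0));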
