
Computer Graphics
COMP3421/9415 2021 Term 3 Lecture 17

What did we learn last week?
Reflections
● Reflective objects with:
  ○ Environment Maps
  ○ Cube Maps
  ○ Realtime Cube Maps
Post Processing
● Framebuffers and Render Targets
● Kernel Sampling
● Bloom

What are we covering today?
Shadow Mapping
● Casting shadows in Polygon Rendering
Deferred Rendering
● Post processing lighting
● Lights as geometry

Shadow Mapping

Blinn-Phong and Shadows
What’s our current system?
● Calculating fragments based on the angle of light and viewer
● “Bottom” surfaces of objects correctly receive no light
● No detection of occluding objects!
● Surfaces light shouldn’t reach will still be lit

What Would Ray Tracing Do?
Light through rays rather than angle calculations
● Any rays that hit an intervening object...
● ...will not reach other objects
● Ray Tracing is still expensive
● Something more efficient?
● Can we use our current rendering tech?

What are we missing?
Information needed for shadows
● Blinn-Phong is missing detection of intervening objects
● Ray Tracing would offer collision detection of light rays
● The crucial information:
  ○ Does the light reach specific objects?
● What technique can we use to find out how far the light rays can reach?

Render to a Framebuffer
A common workaround to replace Ray Tracing
● When we render, we use a depth buffer
● It tells us how far away the closest visible objects are
● We want to know whether light is reaching objects...
● Render to a depth buffer from the perspective of the light!
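The depth-buffer idea can be sketched on the CPU. This is a minimal Python simulation with a 1D "depth map" and illustrative names, not real GPU code; the real version is a depth-only framebuffer render:

```python
def build_depth_map(occluders, num_texels):
    """Simulate a 1D depth map: for each texel, keep the depth of the
    closest surface the light can see (1.0 = nothing, i.e. far plane)."""
    depth_map = [1.0] * num_texels
    for texel, depth in occluders:      # (texel index, depth in [0, 1])
        if depth < depth_map[texel]:
            depth_map[texel] = depth    # standard depth test: keep the closest
    return depth_map

# Two surfaces in front of texel 1; the closer one (0.3) wins the depth test
print(build_depth_map([(1, 0.6), (1, 0.3)], 3))  # → [1.0, 0.3, 1.0]
```

The same "keep the minimum depth" rule is what the GPU's depth test does for free when we render the scene from the light's position.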

Rendering from the Light’s Perspective
Putting a camera where the light source is... we might see a depth map that looks like this (darker is closer, brighter is further away)

A Depth Map from the Light Source
What Information do we get from this?
● A depth buffer will show us the closest visible fragments to the viewer
● From the light source, these are the closest fragments to the light
● The depth buffer records where the light reaches!
● We record this depth buffer to a texture called the Depth Map
● How do we use this while rendering from our main camera, though?

Remember Model/View/Projection Transforms?
Transforming between perspectives
● While rendering a fragment in the main camera...
● Transform that fragment position into the light’s view
● Sample from the Depth Map at the fragment’s position
● Compare the two depth values:
  ○ the Depth Map sample
  ○ the current Fragment Depth
● If the Depth Map sample is lower than the Fragment Depth, the light has not reached the fragment!
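That comparison is the entire shadow test. A tiny Python sketch of the logic (the function name is illustrative; in a shader this is one comparison after the light-space transform):

```python
def in_shadow(frag_depth, depth_map_sample):
    """frag_depth: this fragment's depth from the light's viewpoint (0..1).
    depth_map_sample: the closest depth the light recorded in that direction.
    If something closer to the light was recorded, the light never
    reached this fragment."""
    return depth_map_sample < frag_depth

print(in_shadow(0.8, 0.3))  # an occluder at 0.3 blocks the light → True
print(in_shadow(0.3, 0.3))  # this fragment IS the closest surface → False
```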

Depth Map Testing
Image credit: learnopengl.com

Shadow Mapping
The process
● Render to a separate depth map for every light in the scene
● Prepare a View Transform matrix for every light
● When rendering a fragment:
  ○ Find the fragment position
  ○ For each light:
    ■ Transform the position into the light’s perspective
    ■ Check depth against the light’s depth map
    ■ If it fails the test, remove the light from the lighting equation
  ○ Calculate lighting for the fragment using whatever lights remain
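The per-fragment steps above can be sketched in Python. This is a simplified model rather than shader code: each light is reduced to a pre-computed intensity, and the values are chosen so the arithmetic is exact:

```python
def passes_shadow_test(frag_depth_from_light, depth_map_sample):
    # The light reaches the fragment unless an occluder was recorded closer
    return depth_map_sample >= frag_depth_from_light

def shade_fragment(light_depths, map_samples, light_intensities, ambient=0.25):
    """light_depths[i]:      fragment depth from light i's viewpoint
    map_samples[i]:          depth-map sample for light i at this fragment
    light_intensities[i]:    light i's contribution if it reaches the fragment
    Lights that fail their shadow test drop out of the lighting sum."""
    total = ambient
    for depth, sample, intensity in zip(light_depths, map_samples,
                                        light_intensities):
        if passes_shadow_test(depth, sample):
            total += intensity
    return total

# Light 0 reaches the fragment; light 1 is blocked by an occluder at 0.2
print(shade_fragment([0.5, 0.5], [0.5, 0.2], [0.5, 0.9]))  # → 0.75
```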

Analysis of Shadow Mapping
Pros
● Reasonably accurate model of what light can reach
● Efficient in comparison to Ray Tracing
Cons
● Rendering the scene an extra time per light (but at least it’s a simple depth render, not colour)
● Is your Depth Map accurate enough? Are there sampling artifacts?
● How wide can you render? Enough for a 360° point light?

Shadow Acne
I don’t know where this name came from!
● An artifact of depth mapping
● Depth maps might be on an angle
● Fragments sampling the depth map might not get a perfectly correct depth
● Surfaces are able to cast shadows on themselves!
Image credit: learnopengl.com

Depth Map Accuracy
Depth Maps have their own texels
● Depth measurements then have an “area”, however small
● An exact depth map can have inaccuracies between:
  ○ the part of the depth map texel that is sampled
  ○ and the actual surface depth
● A surface in full light can cast shade on itself
Image credit: learnopengl.com

Shadow Bias
We can introduce a Shadow Bias to correct this
● During shadow calculation
● Move the depth map “into” the object
● Then the depth map never looks like the object is obscuring itself
● This is a partial solution that can lead to “Peter Panning”, where the shadow detaches from the object!
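A sketch of the biased test in Python. The bias value 0.005 is just an illustrative constant; real implementations often scale the bias with the surface's angle to the light:

```python
def biased_shadow_test(frag_depth, map_sample, bias=0.005):
    """Shadow test with a small bias: the depth-map sample is pushed
    slightly 'into' the surface, so texel-sized depth inaccuracies can
    no longer make a surface shadow itself (shadow acne)."""
    return map_sample + bias < frag_depth

# A near-equal pair (self-sampling) stays lit; a real occluder still shadows
print(biased_shadow_test(0.500, 0.498))  # within the bias → False (lit)
print(biased_shadow_test(0.700, 0.400))  # genuine occluder → True (shadowed)
```

Push the bias too far and the depth map moves so deep into the object that contact shadows vanish, which is exactly the Peter Panning artifact above.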
Image credit: learnopengl.com

Over Sampling
Sampling outside the Depth Map
● Like all textures, the Depth Map has 0.0 – 1.0 UV coordinates
● What does this represent?
● This point light can shine on the green cube
● But the object is outside its shadow-mapped frustum!
● What sampling settings should we use here?

Correcting Over Sampling
We can’t cast shadows if we can’t detect obscuring objects
● Anything outside our light’s frustum should be lit
● What value in our depth map keeps things lit?
● The maximum, furthest depth: 1.0
● This works, unless our object is further than the far plane of our light’s frustum (higher than 1.0)
● We can detect a fragment distance higher than 1.0, though:
  ○ if the depth map sample >= 1.0 and the fragment distance >= 1.0
  ○ then apply light to that fragment
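One common way to express this correction (a Python sketch; on the GPU the border case comes from clamp-to-border sampling with a border depth of 1.0):

```python
def shadow_test_clamped(frag_depth, map_sample):
    """Outside the light's frustum, the fragment should stay lit.
    The depth map's border returns 1.0 (the far plane), and any fragment
    beyond the far plane (depth > 1.0) is forced lit as well."""
    if frag_depth > 1.0:
        return False            # beyond the light's far plane: keep it lit
    return map_sample < frag_depth

print(shadow_test_clamped(1.3, 1.0))  # outside the frustum → False (lit)
print(shadow_test_clamped(0.9, 0.4))  # occluded inside the frustum → True
```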

More Shadows . . .
There’s more we don’t have time for
● Shadow Mapping tends to cause jagged edges
  ○ Perspective Aliasing
  ○ Projective Aliasing
● We’d want to smooth them out
● There are also more advanced techniques for Shadow Bias
● We can make shadow mapping more efficient by custom-fitting the light’s frustum to the scene

Break Time
A story of a Graphics Engine: CryEngine, by Crytek
● CryEngine started as an Nvidia tech demo (1999)
  ○ Was known for much longer possible view distances than other engines
● Far Cry (2004) with Ubisoft
  ○ CryEngine gains a reputation for its capability to render large, open environments
● Crysis series (2007-2013)
  ○ “Can it run Crysis?” – a reputation for very steep hardware requirements
  ○ Well known for volumetric lighting/shadows and motion blur
  ○ CryEngine 3 Lighting demo (2010): https://youtu.be/vPQ3BbuYVh8
  ○ CryEngine 3 also pushed features like dynamic vegetation and dynamic caustics
● Eventually bought by Amazon as Lumberyard (now Open 3D Engine)

Deferred Rendering

Limitations of Forward Rendering (our current lighting)
Too many lights? Too many fragments?
● Work per fragment is multiplied by the number of lights in the scene
  ○ Many of these lights won’t even affect the fragment
● Multiple fragments per pixel overwrite each other, wasting resources
● An inefficient algorithm that also includes wasted work when there are several lights in a scene
● O(lights * fragments)
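Back-of-envelope numbers make the gap concrete. All figures here are illustrative (50 lights, a 1-megapixel screen with 4× overdraw, small lights each covering about 10,000 pixels):

```python
def forward_work(num_lights, num_fragments):
    # Forward rendering: every fragment is shaded against every light,
    # including fragments that are later overdrawn
    return num_lights * num_fragments

def deferred_work(num_lights, num_pixels, pixels_per_light):
    # Deferred rendering: one geometry pass writes each pixel once, then
    # each light touches only the pixels its volume covers
    # (pixels_per_light is an illustrative average, not a real measurement)
    return num_pixels + num_lights * pixels_per_light

print(forward_work(50, 4_000_000))            # → 200000000
print(deferred_work(50, 1_000_000, 10_000))   # → 1500000
```

The exact numbers depend on the scene; the point is the shape of the cost: a product of lights and fragments versus a sum of pixels and light coverage.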

What’s the solution?
Deferred Rendering
● Defers lighting until after a fragment is confirmed to be visible
  ○ Uses framebuffers storing different information
● Light volumes
  ○ Lights are rendered as geometry, limiting which fragments they affect
  ○ Also treats lights much like other objects, instead of things that affect all objects
● Effectively: O(lights + pixels)

Deferred Rendering Steps
A system that only lights the visible fragment
● We do a first render pass with geometry and no lighting
● Store the information in some framebuffers
  ○ Each buffer is a standard screen-sized, 3-float framebuffer
● Then do lighting for each pixel

The Geometry Pass
The G-Pass stores information in the G-buffer
● Four standard buffers:
  ○ Fragment position
  ○ Surface normal
  ○ Albedo (diffuse colour)
  ○ Specularity (specular colour)
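A CPU-side sketch of what the G-buffer holds (on the GPU these are floating-point colour attachments on one framebuffer; the dict-of-arrays layout here is just for illustration):

```python
def make_gbuffer(width, height):
    """One screen-sized array per attribute, one entry per pixel."""
    n = width * height
    return {
        "position": [(0.0, 0.0, 0.0)] * n,  # world-space fragment position
        "normal":   [(0.0, 0.0, 1.0)] * n,  # surface normal
        "albedo":   [(0.0, 0.0, 0.0)] * n,  # diffuse colour
        "specular": [0.0] * n,              # specular intensity
    }

g = make_gbuffer(4, 3)
print(len(g["position"]))  # → 12 (one entry per pixel)
```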
Image credit: learnopengl.com

Information per pixel
What can we do with this information?
● The Lighting Pass
● Loop through all pixels
● Calculate lighting for each pixel as if it were a fragment
  ○ The G-buffer should have all the information we need
How much has this helped?
● In complex scenes with overlapping fragments...
● ...we’ve reduced the number of lit fragments to exactly the number of pixels
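A sketch of the lighting pass in Python. To keep it short, only the diffuse (N·L) term of Blinn-Phong is shown, and a light is reduced to a (direction, intensity) pair; the real pass would use the position and specular attributes too:

```python
def lighting_pass(gbuffer, lights):
    """Shade each pixel exactly once using the stored geometry data."""
    out = []
    for normal, albedo in zip(gbuffer["normal"], gbuffer["albedo"]):
        brightness = 0.0
        for light_dir, intensity in lights:
            # Diffuse term: N·L, clamped so back-facing light contributes 0
            ndotl = sum(n * l for n, l in zip(normal, light_dir))
            brightness += max(ndotl, 0.0) * intensity
        out.append(tuple(c * brightness for c in albedo))
    return out

# One pixel facing +z, lit head-on by a light at 0.8 intensity
g = {"normal": [(0.0, 0.0, 1.0)], "albedo": [(1.0, 0.5, 0.0)]}
print(lighting_pass(g, [((0.0, 0.0, 1.0), 0.8)]))  # → [(0.8, 0.4, 0.0)]
```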

What’s next?
Geometry Pass Issues
● We’ve deferred lighting
● But we’re still applying every light to every pixel
● (still applying lights that might not have an effect)
● Add a technique to limit which lights apply to which pixels!

Restricting Lights
Turning off some of the lights!
● We could try to calculate where lights have an effect
  ○ A radius for the attenuation of point lights
  ○ Cones for spot lights
● But if we put this branching into the shader, our parallel GPUs have issues
● In large-scale parallel hardware like GPU cores, threads all run the same code in lock step
● Which means we can’t use if/else to reduce the amount of work!

Light Volumes
Rendering Lights like they’re Geometry
● In the Lighting Pass
● Render each light as a sphere (or another simple shape, like a cube)
  ○ Size based on attenuation radius
● Each light volume will affect certain pixels
● Calculate the light’s effect on those pixels only
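Sizing the sphere means solving the attenuation equation for the distance where the light becomes invisible. A Python sketch, assuming the common attenuation form intensity / (constant + linear·d + quadratic·d²) and a 5/256 cutoff (roughly the darkest distinguishable 8-bit step); both are conventional choices, not the only ones:

```python
import math

def light_volume_radius(constant, linear, quadratic, intensity,
                        cutoff=5 / 256):
    """Solve quadratic*d^2 + linear*d + (constant - intensity/cutoff) = 0
    for the distance d at which attenuated brightness drops below the
    cutoff. The light's sphere mesh is scaled to this radius."""
    c = constant - intensity / cutoff
    return (-linear + math.sqrt(linear * linear - 4 * quadratic * c)) \
        / (2 * quadratic)

r = light_volume_radius(1.0, 0.7, 1.8, 3.0)
print(round(r, 2))  # roughly 9 world units for these coefficients
```

A brighter light gets a larger sphere, so it touches more pixels in the lighting pass, which is exactly the cost model we want.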
Image credit: learnopengl.com

Light Volumes Details
Some things to think about
● Rendering Light volumes lowers the number of lighting calculations
  ○ Each light shades a bounded number of pixels, not a multiple of all fragments
● But it adds a geometry render pass
  ○ Which is why we use simple objects
● We need to take into account the visibility of the light spheres
  ○ If you’re inside a sphere, can you see it?
  ○ We need to adjust culling settings

Deferred Rendering
Pros
● Significantly reduces the number of lighting calculations
  ○ Better the more fragments and lights there are
  ○ Particularly good for multiple small lights
Cons
● Extra render passes
  ○ G-Pass and Lighting Pass
● Memory usage
  ○ Storing the G-buffer and light geometry
● No differentiation of objects
  ○ Can’t use custom shaders for separate objects
Image credit: learnopengl.com

What did we learn today?
Advanced Techniques using Framebuffers and Render Targets
● Shadow Mapping
  ○ Rendering depth from the perspective of the light
  ○ Detecting intervening objects that should cast shadows
● Deferred Rendering
  ○ Defer lighting until after the visible fragment is decided
  ○ Render all lighting-specific information into a G-buffer
  ○ Use light volumes to limit which fragments are affected by which lights
