zfreeze testing - helpful with Oculus Rift? - [Large Images] - BlizzardToaBreeze - 12-16-2014
I recently got an Oculus Rift and have been playing around with the 4.0-3628 (unofficial) revision that supports it. Since the zfreeze issue is related to depth, I thought perhaps I could assist anyone working on zfreeze. I played around with the first level and made an MS Paint pic of where the on-screen items are located -- not sure if this info will be helpful, so I didn't put too much effort into it, but I'd be happy to give anyone whatever info they might need, whether that's screenshots, a write-up, etc.
RE: zfreeze testing - helpful with Oculus Rift? - [Large Images] - MayImilae - 12-16-2014
The developers have already learned how the depth system works by studying what the game requests from Dolphin while running. Essentially, it is blending a skydome and geometry together to form the sky, and using early depth fail as a hack to build the scene. Remember that the skydome follows the player at all times, and all games have to hack around with depth to get the skydome behind the geometry of the level. But in Rogue Leader and Rebel Strike, there are also planets and other objects present in the "background", and the game has to know which 3D geometry is background and which is foreground without overdraw problems (commonly called z-fighting). Since Dolphin doesn't have zfreeze support, the hack does not function correctly and the skydome is rendered in its actual place, which appears as an extremely short draw distance (the level is still drawing, but you can't see it because the sky is now in the way).
RE: zfreeze testing - helpful with Oculus Rift? - [Large Images] - BlizzardToaBreeze - 12-17-2014
Cool -- for my own understanding, would you mind explaining in a little more detail? I made a diagram that reflects what I saw tonight when I used the freelook camera, to put the camera "ahead" of the skymap while the game was running. Once the camera was at its new position, I took one screenshot looking straight ahead (you can see lasers), and one screenshot looking to the right.
The red triangles are the X-wings.
"1" and "2" are camera positions:
The camera at the start of the game is "1".
The camera when I took the screenshots is "2".
A, B, and C are areas that are rendered.
"A" is the skymap. It is like a bubble, whose interior is a picture of stars + an orange planet. It moves as the X-wings move.
"B" is the area of the Death Star that gets rendered when you fly close to the surface.
"C" is like "B", but only visible once the camera is moved to position 2. It is visible no matter how close you fly to the surface, and much larger than "B".
Do you mind re-explaining what's going on, using the diagram as a reference? That would be so helpful.
RE: zfreeze testing - helpful with Oculus Rift? - [Large Images] - pokemontrainer - 12-30-2014
I'll post a copy of Phire's explanation of zfreeze. I hope it's okay that I post it here:
Quote:Phire
Uses for zfreeze
The original Intention
Used by: Mario Power Tennis, Super Mario Strikers
zfreeze was designed as a way to eliminate z-fighting when rendering decals, instead of other hacks like OpenGL's glPolygonOffset(), but developers never really used it for that. I suspect it's just too expensive, requiring a new draw call for every set of decals on a different triangle, so developers just manually bias the vertices instead.
Going through the list of fifologs which jmc47 collected, there are exactly two games (Mario Tennis and Mario Strikers) that use zfreeze in its intended decal rendering mode. Mario Strikers uses it for rendering the shadows onto the field, and Mario Tennis uses it for rendering the tennis court lines. But Mario Tennis uses other zfreeze-based tricks for its shadows (which I'll cover below), so Super Mario Strikers is the only game which can be fixed with that kind of trickery.
Depth override
Used By: Rogue Squadron 2/3, Mario Golf: Toadstool Tour, Blood Omen 2
Most famously used for Rogue Squadron's skyspheres, which are rendered close to the player while zfreeze overrides the depth, projecting them out behind all other objects to the zfar plane. This is essentially the same as putting depth = 1.0 in a fragment shader (which is what my hack did), except that on the GameCube this is done during triangle setup, so early z culling still happens. Factor 5 used this method because putting the skysphere in the distance would take up a huge chunk of the zbuffer range (due to Factor 5 using hardware anti-aliasing, they were limited to a 16-bit zbuffer), and rendering the skysphere first with the zbuffer disabled would cause too much overdraw.
I'm not exactly sure why the other games use zfreeze for depth overrides, but both of them lock different objects to the zfar and znear planes.
EA shadows
Used By: Most EA sports games, Mario Power Tennis, Need For Speed: Hot Pursuit 2
Shadows are one of the harder things in 3D graphics; many methods have been developed for dynamic shadows over the years, and they all have various tradeoffs. Selection of a shadowing technique depends a lot on the capabilities/performance of the hardware. Doom 3's famous stencil volume shadows produce the best looking results for sharp shadows, but modern hardware isn't optimised for its excessive stencil operations, so most modern games use shadow maps, which modern hardware is really good at (but the resolution is generally limited, resulting in pixelated shadows).
The GameCube doesn't have a stencil buffer, so it can't do stencil volume shadows. It can kind of do shadow mapping (self-shadowing in Rogue Squadron, shadows in Luigi's Mansion), but most games use other methods.
Most games that I've looked into appear to use a hybrid between planar projection shadows and shadow mapping. Taking advantage of the cheap hardware vertex transformations and cheap framebuffer-to-RAM copies, they render a character or object from the perspective of the light into a framebuffer with all-black polygons. The resulting black and white shadow mask is copied to a texture, which is carefully stretched across the level geometry with alpha blending to create the illusion of a shadow.
But EA sports games use the older method of pure projection shadows, where the shadow object is projected onto the floor in software (which is easy because the floor of sports games is completely flat) and rendered on the floor. This works fine if you want a pure black shadow, but generally you want an alpha blended shadow, which causes issues when polygons are overlapping. Either you get parts of the shadow which are blended twice, or you get z-fighting. Normally the correct solution is to render the shadow to the stencil buffer and blend each shadow pixel just once.
But the GameCube doesn't have a stencil buffer. Instead these games enable zfreeze, which ensures that each pixel on the screen will always have an identical depth in the zbuffer if rendered to twice. Then they change the depth compare method from the usual less-than-or-equal to less-than, so each pixel of the shadow can only possibly be drawn once. This essentially creates a 1-bit stencil buffer inside the depth buffer.
I thought Factor 5's use of zfreeze to preserve their limited zbuffer precision was pretty cool, but this shadow method used by EA is absolutely genius.
Edit: On second thought, stencil volume shadows might actually be possible. The alpha buffer with blend logic operations can also be used to emulate a stencil buffer. It supports xor, which is technically enough to implement stencil volume shadows.