RaverX3X Wrote:This is so true on so many different levels...
I do know what you're getting at...
Remember, the Wii is not a modern PC game on modern hardware. Pushing 4K on your PC to play Wii games is not as GPU-intensive as a 4K PC game. There are different factors at play in 4K gaming, or even 1080p gaming, beyond just GPU power or CPU power.
One, you have to look at the base resolution of the game and the complexity of what's being displayed on screen.
The Wii's native output at the time was 480i, or 480p provided you used component video.
Also remember that increasing resolution mainly affects GPU load, while changing frame rate affects CPU load.
Increasing resolution just changes how many pixels need to be rendered to produce the target image, while the CPU still needs to do the calculations to render x frames at the target resolution.
The "frame time" is the time it takes to execute a single frame, and is generally expressed in milliseconds. At 30 fps, developers have one-thirtieth of a second, or 33.33 milliseconds, to render each frame. Doubling the frame rate to 60 fps cuts the frame time in half to one-sixtieth of a second, or 16.67 milliseconds. It takes time to render everything on the screen — objects, particle effects like explosions, visual effects like antialiasing and more — so whatever the target frame rate is, the total rendering time can't exceed the corresponding frame time.
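The frame-time arithmetic above is just the reciprocal of the target frame rate; a quick sketch:

```python
# Frame time is the reciprocal of the frame rate, expressed in milliseconds.
def frame_time_ms(fps: float) -> float:
    return 1000.0 / fps

print(frame_time_ms(30))  # ~33.33 ms per frame
print(frame_time_ms(60))  # ~16.67 ms per frame
```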
To work out how much VRAM you would need for a Wii game, there are two ways to do this. The first is to stay at 1x internal resolution (IR) and calculate from your desktop resolution:
height x width = pixels per screen
color depth in bits ÷ 8 = bytes per pixel
bytes per pixel x pixels per screen = total bytes needed to make up the screen; now multiply that by your target frame rate = total GPU VRAM needed to make up the screen, minus complexity (not all scenes are the same...)
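The per-frame part of that arithmetic can be sketched as follows (1920x1080 and 32-bit color are example values, not anything from the post):

```python
# Per-frame framebuffer size: pixels on screen times bytes per pixel.
width, height = 1920, 1080        # example desktop resolution
bits_per_pixel = 32               # example color depth
bytes_per_pixel = bits_per_pixel // 8

pixels_per_screen = width * height
bytes_per_frame = pixels_per_screen * bytes_per_pixel

print(pixels_per_screen)               # 2073600 pixels
print(bytes_per_frame / (1024 ** 2))   # ~7.91 MiB per frame
```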
The second way is to use step 1 to figure out your VRAM, then do the same calculation again at your target IR, because your target IR is a different pixel count than your screen's own. At 2x, 3x, or 4x IR it is going to be several times the amount you calculated for 1x IR.
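For reference, an IR multiplier scales both width and height, so the pixel count grows with the square of the multiplier; a quick sketch (640x528 as an example 1x base resolution):

```python
# An internal-resolution multiplier applies to both dimensions,
# so pixel count grows quadratically with the multiplier.
base_w, base_h = 640, 528   # example 1x internal resolution
base_pixels = base_w * base_h

for ir in (1, 2, 3, 4):
    pixels = (base_w * ir) * (base_h * ir)
    print(f"{ir}x IR: {pixels} pixels ({pixels // base_pixels}x the 1x count)")
```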
This is just for a console game; it gets a lot more complex with PC games, as different games have different screen complexities, different shader effects, different target FPS, and so on.
TL;DR: Consoles, even though they might be harder to emulate than a PC game, are not as GPU-heavy as they are CPU-heavy.
Also note, if you plan on doing 4K gaming with 4-way SLI, I suggest using 3 in SLI and the 4th as a dedicated PhysX card.
Note that the number you get is going to be for 1 frame, so you need to multiply your total end number by 30 or 60 frames per second; that's the total VRAM needed over a 30-frame or 60-frame time.
I don't have time to sort through that mess right now and nitpick all the things that are either completely irrelevant or wrong. But I will point out two corrections really quick.
Frame rate has nothing to do with VRAM usage. Games do not hold onto every frame they render for an entire second and then dump them all. That would be stupid on so many levels.
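To put a number on it: with double buffering the GPU only ever holds a couple of completed frames at once, so framebuffer VRAM is buffer count times frame size and is identical at 30 fps and 60 fps. A minimal sketch (1080p, 32-bit color, and double buffering are example values):

```python
# Framebuffer VRAM depends on buffer count, not frame rate: the GPU
# overwrites its few swap-chain buffers instead of keeping a second of frames.
width, height, bytes_per_pixel = 1920, 1080, 4
buffers = 2                         # double buffering (example)

frame_bytes = width * height * bytes_per_pixel
vram_bytes = buffers * frame_bytes  # the same at 30 fps and at 60 fps

print(vram_bytes / (1024 ** 2))     # ~15.8 MiB, regardless of fps
```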
Using the 4th card for dedicated PhysX will result in a performance loss in >99% of PC games, since very few PC games use GPU-accelerated PhysX.
Now I remember why I stopped coming to this subforum.
"Normally if given a choice between doing something and nothing, I’d choose to do nothing. But I would do something if it helps someone else do nothing. I’d work all night if it meant nothing got done."
-Ron Swanson
"I shall be a good politician, even if it kills me. Or if it kills anyone else for that matter. "
-Mark Antony