Dolphin, the GameCube and Wii emulator - Forums

Full Version: Strange results with personal CPU benchmark
Recently I have noticed a strange trend with my CPU benchmark when comparing my Nvidia and Intel GPUs. I have been running this test for some time, and until recently it always gave consistent results: the Nvidia GTX 970 dominates the HD 630 (previously HD 4000) integrated graphics at minimum visual settings. I keep the IR at 1x native, all enhancements off, v-sync off, and the game speed unlimited so the emulator can run as fast as my hardware allows.
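For anyone trying to match the settings exactly, here's a rough sketch of how they map onto Dolphin's INI files. The config path and key names (EmulationSpeed, InternalResolution, VSync) are assumptions based on my install and can differ between Dolphin versions, so double-check them against what the GUI writes out:

```python
# Sketch: apply the benchmark settings to Dolphin's config files.
# Paths and key names are assumptions for a typical Windows Dolphin 5.x
# install -- verify against your own Dolphin.ini / GFX.ini before trusting it.
import configparser
from pathlib import Path

CONFIG_DIR = Path.home() / "Documents" / "Dolphin Emulator" / "Config"

def patch_ini(path, changes):
    cp = configparser.ConfigParser()
    cp.optionxform = str          # keep Dolphin's mixed-case key names intact
    cp.read(path)
    for section, options in changes.items():
        if not cp.has_section(section):
            cp.add_section(section)
        for key, value in options.items():
            cp[section][key] = value
    with open(path, "w") as f:
        cp.write(f)

# Unlimited game speed (0.0 = no cap) in the main config.
patch_ini(CONFIG_DIR / "Dolphin.ini", {"Core": {"EmulationSpeed": "0.0"}})

# 1x native IR and v-sync off in the graphics config.
# (The IR key may be named "EFBScale" on older builds.)
patch_ini(CONFIG_DIR / "GFX.ini", {
    "Settings": {"InternalResolution": "1"},
    "Hardware": {"VSync": "False"},
})
```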

The 970 has always produced a higher maximum framerate than the integrated GPU. Recently, though, the Intel GPU has been posting a significantly higher max framerate than the 970, which I find very odd.

Here's my save state for Metroid Prime 1 GC NTSC where I benchmark both GPUs: http://www.mediafire.com/file/mnmclgi5ns...GM8E01.rar

I cannot explain this. It isn't any particular API, as I get the same results in all of them. It isn't the drivers either; I've run DDU and reinstalled from scratch. I honestly have no idea why the GTX 970 is underperforming compared to the HD 630 in the same scenario. If anyone could try this save state with the settings above (most importantly uncapped framerate and 1x IR) and report their max framerate, I'd really appreciate it.

Here's my max FPS on Intel HD 630:
http://i.imgur.com/1wRYbIJ.png

I don't have a screenshot of it right now, but my 970 hits a maximum of 297 FPS in the same spot.
It could be that the card isn't kicking into its high-performance mode, or that it's overheating. Check your temperature and clock speeds while you're running a game and post them here.
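Something like this would do for logging; it's only a sketch built around nvidia-smi's query interface (it assumes nvidia-smi is on the PATH, and the polling interval and query fields are my choices, not anything Dolphin-specific):

```python
# Sketch: poll GPU temperature, clocks, load, and performance state once a
# second while the game runs. Requires nvidia-smi to be available on the PATH.
import subprocess
import time

FIELDS = "temperature.gpu,clocks.sm,clocks.mem,utilization.gpu,pstate"

while True:
    out = subprocess.run(
        ["nvidia-smi", f"--query-gpu={FIELDS}", "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    # Example output line: "62, 1316, 3505, 34, P0"
    print(time.strftime("%H:%M:%S"), out)
    time.sleep(1)
```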
It's definitely not overheating. I run a custom fan curve with a minimum 30% fan duty cycle that ramps up aggressively as the card approaches 70 °C. Not to mention it's an EVGA ACX cooler, so it stays plenty cool.

I thought it was the clock speed too, but it isn't. Even with the driver set to "Prefer maximum performance", it still doesn't reach the framerates the Intel card hits. It's definitely a difference in driver efficiency: the CPU reaches its limit much faster with the Nvidia driver than it does with the Intel one.
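One way to sanity-check that theory is to watch per-core CPU load while the save state runs. This is just a sketch and assumes the psutil package is installed; since Dolphin's emulation work sits on one or two threads, a single core pinned near 100% at a lower framerate under the Nvidia driver would point at driver overhead on the CPU side:

```python
# Sketch: print per-core CPU load once a second while the benchmark runs.
# If one core sits near 100% under the Nvidia driver at a lower FPS than
# under Intel, the driver overhead is eating into Dolphin's emulation thread.
import psutil
import time

while True:
    per_core = psutil.cpu_percent(interval=1.0, percpu=True)  # blocks ~1 s
    busiest = max(per_core)
    print(time.strftime("%H:%M:%S"),
          " ".join(f"{p:5.1f}" for p in per_core),
          f"| busiest core: {busiest:.1f}%")
```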

I just want someone else to try to replicate these results so I know it isn't something on my end.