After going at it for a while, I'd say there has been improvement going from the previous drivers to now, but slowdown is still evident (with emulation at 100%) where I wasn't getting any with my older, inferior GPU.
Perhaps, right now, ATI is the way to go as far as this generation goes... especially considering the prices and the unrefined compute capabilities of the Nvidia 600 series.
Users of both the 7xxx series and 6xx series cards have reported similar performance issues. Check to see that your GPU is not running at its idle clock speeds.
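If you'd rather check this programmatically than eyeball a monitoring tool, here's a minimal sketch for the Nvidia side (it assumes an Nvidia card and the pynvml Python bindings; on an AMD card a tool like GPU-Z shows the same readings). It just compares the current core clock against the maximum the driver reports.
Code:
import pynvml  # NVML bindings for Nvidia GPUs (nvidia-ml-py package)

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

current = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_GRAPHICS)
maximum = pynvml.nvmlDeviceGetMaxClockInfo(handle, pynvml.NVML_CLOCK_GRAPHICS)
print("core clock: %d MHz (max %d MHz)" % (current, maximum))

# If the card sits far below its rated clock while Dolphin is running,
# it is probably stuck in an idle/low-power state.
if current < maximum / 2:
    print("GPU appears to be running at or near idle clocks")

pynvml.nvmlShutdown()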
Quote:Perhaps, right now, ATI is the way to go as far as this generation goes...
I wouldn't do that if I were you. Both architectures are radically different from their predecessors, which has left a lot of kinks to iron out in the new drivers, but the GeForce 600 drivers are in much better shape right now than the HD 7000 ones.
Quote:especially considering the prices and the unrefined compute capabilities of the Nvidia 600 series.
The prices are fine; I have no idea what you're talking about as far as that goes.
The compute capabilities are a letdown, but we all expected that as soon as they mentioned it would be a VLIW-style architecture with 1/24th DP throughput. If compute performance is a really big issue for you, get a GeForce 500 series card. The HD 7000 series cards are the best performers for compute applications right now, but because they don't support CUDA, which the majority of compute applications use, they are unfortunately not a valid option for most people in that market (unless you're absolutely sure that none of the applications you're trying to hardware accelerate use CUDA).
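To put rough numbers on that 1/24 figure, here's a back-of-the-envelope sketch using the reference shader counts and base clocks of the GTX 680 and HD 7970 (theoretical peaks only; boost clocks and real workloads will shift these):
Code:
def gflops(shaders, mhz, ops_per_clock=2):  # fused multiply-add counts as 2 FLOPs
    return shaders * mhz * ops_per_clock / 1000.0

sp_gtx680 = gflops(1536, 1006)   # ~3090 GFLOPS single precision
dp_gtx680 = sp_gtx680 / 24       # ~129 GFLOPS double precision at the 1/24 rate
sp_hd7970 = gflops(2048, 925)    # ~3789 GFLOPS single precision
dp_hd7970 = sp_hd7970 / 4        # ~947 GFLOPS double precision at the 1/4 rate
print(dp_gtx680, dp_hd7970)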
[slightly related rant]
Everyone bitched about the heat and power consumption of Fermi and how inefficient it was in performance per watt, but completely ignored its massive advantage in OpenCL, DirectCompute, and CUDA. Now they've designed an architecture that is optimized for energy efficiency by doing literally the only things they can do to achieve it: reducing control hardware and minimizing the use of 64-bit data paths. Yet now everyone ignores those achievements and bitches about compute performance. You can't have both, people! VLIW is better for gaming, superscalar is better for compute.
Grr, I can't seem to shake off the ATI name.
Hmm... so people are having AMD-related issues with Dolphin too? If my memory serves me right, I do recall that being the case on this forum. That's also quite unfortunate. At least we can expect things to get better from this point rather than worse, considering the immaturity of Kepler and GCN, as you've already said. Still, there's no way I was ever thinking about swapping out my 670! I just wondered if I should have bought a 7970 instead in the first place, 'perhaps.'
Checking the prices offered on both sides, I see that my card has dropped a lot in price quite rapidly. Still, when I wrote that comment it wasn't long after the 660 Ti had been released in the EU against the likes of the 7950, price for price, and AMD had put out their recent CAP profiles, which have provided sizeable performance gains for their customers. The 670 and 7970 are priced comparably here, but with overclocking and AMD's CAPs one is likely to go a whole lot further with the latter card.
I'm not really all that concerned with the compute architecture of today's cards. The reason I mentioned it is AMD's current push to get game developers to take advantage of GCN: Sleeping Dogs providing its exhaustive AA through specific DirectCompute work, and Epic's belief, when talking about UE4, that gaming in general is heading in this direction.
It makes me wonder if Nvidia will stick to its principles concerning the design of GK104/GK106 for future cards.
Also, I just installed the newly released WHQL Forceware a few hours ago, so I can try these out.