Dolphin, the GameCube and Wii emulator - Forums

Full Version: Hardware Discussion Thread
About GTX 260 vs GTX 750 Ti
The GTX 750 Ti (stock) has less memory bandwidth ...
The more bandwidth you have, the better it will handle higher resolutions and levels of AA in Dolphin.
Maybe GPU Boost 2.0 will raise the memory clock (and with it the memory bandwidth), but it won't reach the GTX 260's 111.9 GB/s unless you overclock the GTX 750 Ti.
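For reference, here's a minimal sketch (Python) of where those peak-bandwidth figures come from, assuming the standard spec-sheet values for effective memory data rate and bus width (these are the published reference numbers, not measurements):

Code:
# Peak theoretical memory bandwidth = effective data rate * bus width in bytes.
def bandwidth_gb_s(effective_mt_s, bus_width_bits):
    return effective_mt_s * 1e6 * (bus_width_bits / 8) / 1e9

print(bandwidth_gb_s(1998, 448))  # GTX 260 (GDDR3, 448-bit)    -> ~111.9 GB/s
print(bandwidth_gb_s(5400, 128))  # GTX 750 Ti (GDDR5, 128-bit) -> ~86.4 GB/s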
DatKid20 Wrote: Nope. The Tegra architecture was just merged with the desktop architecture. I don't know why they weren't merged before, but Nvidia themselves published the roadmap showing Kepler as the first desktop GPU architecture in mobile devices.

While you are technically correct, it's important to look at the big picture here.

While the microarchitecture of Tegra is technically unique, almost every part of it was borrowed from desktop GPU architectures that Nvidia had already developed. It's a mixture of G70, G80, and G100 parts: the ALUs, for example, are almost identical to the G70 ALUs, and the cache system is similar to G100's. The reason they didn't fully merge earlier is that the small die size of a mobile SoC greatly restricts the transistor budget, which forced them to make cuts. That, and the fact that mobile game graphics engines are designed around older programming models for obvious reasons.

As transistor sizes shrink, transistor budgets grow. This allows GPU engineers to add more "fat" to a design to make it more flexible. The "fat" (new features at the metal layer) has a smaller impact on overall transistor count than it would have had last generation, so it has to be added gradually, once its effect on transistor count becomes acceptable. The result is essentially a to-do list that the engineers at Nvidia work from: every time they develop a new architecture, they pick specific features from the list based on current market trends and save the rest for future architectures. Over time, as GPUs get faster, programming restrictions have more and more of an impact on the end result compared to raw throughput, which is why GPU architectures gravitate toward adding more and more "fat" as time goes on. But it has to be done in a controlled manner; too much at once uses up too much of the transistor budget and kills efficiency. Every project therefore has a different optimal point based on its transistor budget, which the engineers spend a great deal of time determining. Smaller chips have a lower transistor budget and therefore have to carry less fat.

If you look at the documentation for Nvidia's GeForce ULV (the microarchitecture used for the GPUs in Tegra), you'll see that they basically started with the G70 architecture based on this kind of analysis. But because of differences in mobile graphics APIs, and the low memory bandwidth available due to limited space on the circuit board for memory circuits (these are intended for smartphones, after all), changes had to be made. Many of those changes were taken directly from other architectures they had already developed, because that saves a lot of time and development resources.
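To make the transistor-budget point concrete, here's a toy calculation with purely hypothetical transistor counts (none of these figures come from Nvidia): a feature block of roughly fixed cost eats a much bigger share of a small mobile SoC's budget than of a large desktop die's, which is why it lands on the big chips first.

Code:
# Hypothetical numbers for illustration only.
FEATURE_COST = 50e6  # transistors for one fixed "fat" feature block (made up)

budgets = {
    "small mobile SoC GPU": 400e6,
    "older desktop GPU": 1.4e9,
    "current desktop GPU": 3.5e9,
}
for chip, budget in budgets.items():
    print(f"{chip}: {FEATURE_COST / budget * 100:.1f}% of the transistor budget")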

Now that mobile GPUs have gotten faster and mobile games more advanced, it's time to add some more fat to give mobile GPUs more flexibility. So to save development resources they are simply reusing Kepler, since it already fits the optimal parameters of their design goals. But you will notice that this is happening at the same time that desktop GPUs are moving to Maxwell. They plan to move Tegra to Maxwell around the time that desktops move to Volta, and so on. It's not a true merger. Mobile GPUs will continue to remain behind and use older technology for exactly the same reasons they designed GeForce ULV in the first place: smaller chips = smaller transistor budget = less fat to reach the optimal parameters. Nothing has really changed, other than the fact that devs can now port games more easily and Nvidia can save development resources by reusing desktop microarchitectures. The reason they didn't do this before is that mobile game devs weren't asking for this level of flexibility at the time (which made them gravitate toward an older ALU design to maximize the performance/power ratio at the expense of flexibility), but they still needed some of the optimizations from newer architectures due to the limitations of smartphone hardware/software design. So they came up with a hybrid design that they no longer need to use.

Maxwell really has nothing to do with any of this. Like I said, Tegra borrows from the desktop architectures, not the other way around. It has always been that way and it always will be. All GPUs for all platforms are developed with power efficiency in mind, just with different power and transistor budgets, which require different trade-offs to reach that optimal level of efficiency.

admin89 Wrote: The more bandwidth you have, the better it will handle higher resolutions and levels of AA in Dolphin.

This widely held belief on the forums is illogical and is the result of a misinterpretation of one of my threads. I am willing to bet that a GTX 750 Ti would beat a GTX 260 C216 in Dolphin despite the lower memory bandwidth. Remember that, thanks to caching improvements and overall improvements in pipeline efficiency, newer GPU microarchitectures don't need as much memory bandwidth as older architectures to achieve the same pixel throughput in most shaders.
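As a rough illustration of that last point, here is a toy model (Python) with made-up numbers, not measurements of either card: if a newer architecture's caches service a larger fraction of framebuffer and texture traffic, the off-chip bandwidth needed for the same pixel throughput shrinks accordingly.

Code:
# Toy model with hypothetical hit rates, just to show the shape of the argument.
def offchip_gb_s(pixels_per_s, bytes_per_pixel, cache_hit_rate):
    # Bytes per second that still have to cross the memory bus.
    return pixels_per_s * bytes_per_pixel * (1.0 - cache_hit_rate) / 1e9

older = offchip_gb_s(10e9, 8, 0.30)  # hypothetical older architecture
newer = offchip_gb_s(10e9, 8, 0.55)  # hypothetical newer architecture
print(older, "GB/s vs", newer, "GB/s for the same pixel throughput")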
(02-06-2014, 12:00 PM)garrlker Wrote: 50-100 bucks. Preferably closer to 50. No preference on pressure, not loud, and gaming/programming.
$58+shipping on Monoprice. Save $15 with promo code BONUS15 for a limited time.
(02-28-2014, 07:29 PM)lamedude Wrote: $58+shipping on Monoprice. Save $15 with promo code BONUS15 for a limited time.
Thank you!!!
Out of stock, and $70 with no promo on Amazon. Damn you, capitalism!
It doesn't even exist on Amazon.co.jp
Perhaps I should buy a Razer BlackWidow in the near future (2 months or so). The standard version is pretty cheap ($95), but I'll probably get the Ultimate version.
I'm thinking of upgrading my 680 to a 780 Ti. I am not able to play a lot of games at 1080p with 4xAA, and some games have trouble running even with fast depth calculation on. I'm not sure, however, whether I should wait for the high-end Maxwell and/or a 790. I could probably survive a year without upgrading.
I suggest waiting for the 800 series. But frankly, you'd be better served by the CPU architecture after Broadwell.
Yeah, a 680 should not be producing a bottleneck with those settings. Either your card is throttling or your CPU is the bottleneck.
(03-07-2014, 08:23 AM)NaturalViolence Wrote: Yeah, a 680 should not be producing a bottleneck with those settings. Either your card is throttling or your CPU is the bottleneck.

Well, unless the CPU has some part in rendering and/or applying AA, it's definitely not that (since lowering the resolution lets me run almost any game at full speed, except maybe Metroid Prime with EFB to RAM).

And I don't think it's throttling... usage is shown as 100% (using Open Hardware Monitor). And in fact, running most PC games - such as Skyrim - at max settings (including maxing everything I could in the Nvidia control panel), I can easily run at 80 FPS or so...