

Dolphin, the GameCube and Wii emulator - Forums › Offtopic › Delfino Plaza

Hardware Discussion Thread

02-19-2014, 03:22 PM
#391
admin89 Offline
Overclocker™ ✓ᵛᵉʳᶦᶠᶦᵉᵈ
*******
Posts: 6,889
Threads: 127
Joined: Nov 2009
About GTX 260 vs GTX 750 Ti:
The GTX 750 Ti (stock) has less memory bandwidth.
The more bandwidth you have, the better it will handle higher resolutions and levels of AA in Dolphin.
GPU Boost 2.0 may raise the memory clock, and with it the memory bandwidth, but it won't reach 111.9 GB/s (GTX 260) unless you overclock the GTX 750 Ti.
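As a back-of-the-envelope check, peak memory bandwidth is just the bus width (in bytes) multiplied by the effective transfer rate. A small Python sketch using the published specs for these two cards (448-bit GDDR3 at 1998 MT/s effective for the GTX 260, 128-bit GDDR5 at 5400 MT/s effective for the stock GTX 750 Ti):

```python
def peak_bandwidth_gbs(bus_width_bits: int, effective_rate_mtps: float) -> float:
    """Peak memory bandwidth in GB/s: bus width in bytes x effective transfers per second."""
    return (bus_width_bits / 8) * effective_rate_mtps * 1e6 / 1e9

# GTX 260 (Core 216): 448-bit bus, GDDR3 at 1998 MT/s effective
gtx260 = peak_bandwidth_gbs(448, 1998)    # ~111.9 GB/s
# GTX 750 Ti (stock): 128-bit bus, GDDR5 at 5400 MT/s effective
gtx750ti = peak_bandwidth_gbs(128, 5400)  # 86.4 GB/s
print(f"GTX 260: {gtx260:.1f} GB/s | GTX 750 Ti: {gtx750ti:.1f} GB/s")
```

So the stock 750 Ti would need roughly a 30% memory overclock to match the GTX 260's raw bandwidth.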
Laptop: (Show Spoiler)
Clevo W230SS : 3200x1800 IPS | i7 4700MQ @ 3.6GHz (Intel XTU + Triple fan mod) | GTX 860M GDDR5 | 128GB Toshiba CFD SSD | 16GB DDR3L 1600MHz
Aspire 715 43G : 1080p 144Hz |  R5 5625U @ 4.3GHz | Nvidia RTX 3050 4GB | 500GB WD SSD  | 16GB DDR4 3200MHz 
Mini PC :: (Show Spoiler)
G3258 @ 4.6GHz | ELSA GTX 750 | Asrock Z87E ITX | 600W SFX 80+ Gold Silverstone + SG06-LITE | Corsair Vengeance 8GB 2000MHz | Scythe Kozuti + Ao Kaze | 45TB 2.5" Ex HDD (in total) , Zelda Gold Wiimote , LE Wii Classic Controller , Gold LE PS3 DualShock , BlackWidow Chroma ,
Now Playing : Xenoblade Definitive Edition on Yuzu - Switch Emu 

 
02-19-2014, 06:02 PM
#392
NaturalViolence Offline
It's not that I hate people, I just hate stupid people
*******
Posts: 9,013
Threads: 24
Joined: Oct 2009
DatKid20 Wrote: Nope. The Tegra architecture was just merged with the desktop architecture. Kepler will be the first desktop GPU architecture in mobile devices. I don't know why they weren't merged before, but Nvidia themselves made the roadmap where Kepler is the first desktop GPU architecture on mobile devices.

While you are technically correct it's important to look at the big picture here.

While the microarchitecture of Tegra is technically unique, almost every part of it was borrowed from desktop GPU architectures that Nvidia had already developed. It's a mixture of G70, G80, and G100 parts: the ALUs, for example, are almost identical to the G70 ALUs, and the cache system is similar to G100's.

The reason they didn't fully merge earlier is that the small die size of a mobile SoC greatly restricts the transistor budget, which forced them to make cuts. That, and the fact that mobile graphics engines are designed around older programming models, for obvious reasons. As time goes on, transistor budgets grow as transistor size decreases. This allows GPU engineers to add more "fat" to the design to make it more flexible. The "fat" (new features at the metal layer) has a smaller impact on overall transistor count than it would have had last generation, so it has to be added gradually, once its effect on transistor count becomes acceptable. This results in a to-do list that the engineers over at Nvidia work from: every time they develop a new architecture, they pick specific features from the list based on current market trends and save the rest for future architectures.

Over time, as GPUs get faster, programming restrictions have more and more of an impact on the end result compared to raw throughput. This is why GPU architectures gravitate toward adding more and more "fat" as time goes on. But it has to be done in a controlled manner: too much at once uses up too much of the transistor budget and kills efficiency. This results in a different optimal point for every chip, based on the project's transistor budget, which the engineers spend a great deal of time determining. Smaller chips have a lower transistor budget and therefore have to carry less fat. If you take a look at the docs for Nvidia's GeForce ULV (the microarchitecture used for the GPUs in Tegra), you will see that they basically started with the G70 architecture based on this analysis.

But due to differences in mobile graphics APIs, and the low memory bandwidth that comes from limited circuit-board space for memory (these chips are intended for smartphones, after all), changes had to be made. Many of these changes were taken directly from other architectures they had already developed, because that saves a lot of time and development resources.

Now that mobile GPUs have gotten faster and mobile games more advanced, it's time to add some more fat to give mobile GPUs more flexibility. So, to save development resources, they are just reusing Kepler, since it already fits the optimal parameters of their design goals. But you will notice that this is happening at the same time that desktop GPUs are moving to Maxwell. They plan to move Tegra to Maxwell at around the time that desktops move to Volta, and so on. It's not a true merger. Mobile GPUs will continue to remain behind and use outdated technology for the exact same reasons they designed GeForce ULV in the first place: smaller chips = smaller transistor budget = less fat to reach optimal parameters. Nothing has really changed, other than the fact that devs can now port games more easily and Nvidia can save development resources by reusing desktop microarchitectures. The reason they didn't do this before is that mobile game devs weren't asking for this level of flexibility at the time (which made them gravitate toward an older ALU design that maximizes the performance/power ratio at the expense of flexibility), but they still needed some of the optimizations made in newer architectures due to the limitations of smartphone hardware/software design. So they came up with a hybrid design that they no longer need to use.

Maxwell really has nothing to do with any of this. Like I said, Tegra borrows from the desktop architectures, not the other way around. It's always been that way, and it always will be. All GPUs for all platforms are developed with power efficiency in mind, just with different power and transistor budgets, which require different trade-offs to be made to reach that optimal level of efficiency.

admin89 Wrote: The more bandwidth you have, the better it will handle higher resolutions and levels of AA in Dolphin

This widely held belief on the forums is illogical, and it's the result of a misinterpretation of one of my threads. I am willing to bet that a GTX 750 Ti would beat a GTX 260 C216 in Dolphin despite the lower memory bandwidth. Remember that, thanks to caching improvements and overall improvements in pipeline efficiency, newer GPU microarchitectures don't need as much memory bandwidth as older architectures to achieve the same pixel throughput in most shaders.
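A toy model of that caching point (the hit rates below are invented purely for illustration, not measurements of any real GPU): only cache misses generate DRAM traffic, so a better cache lets a lower raw bandwidth serve the same request stream.

```python
def dram_traffic_gbs(requested_gbs: float, cache_hit_rate: float) -> float:
    """DRAM traffic needed to serve a request stream: only the misses reach memory."""
    return requested_gbs * (1.0 - cache_hit_rate)

# The same 150 GB/s of texture/framebuffer requests on two hypothetical GPUs:
older = dram_traffic_gbs(150, 0.40)  # weaker caching: 90 GB/s must come from DRAM
newer = dram_traffic_gbs(150, 0.55)  # better caching: 67.5 GB/s must come from DRAM
```

Under these made-up numbers, the newer chip gets by with about 25% less raw bandwidth for the same effective throughput.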
"Normally if given a choice between doing something and nothing, I’d choose to do nothing. But I would do something if it helps someone else do nothing. I’d work all night if it meant nothing got done."  
-Ron Swanson

"I shall be a good politician, even if it kills me. Or if it kills anyone else for that matter. "
-Mark Antony
02-28-2014, 07:29 PM
#393
lamedude Offline
Senior Member
****
Posts: 360
Threads: 7
Joined: Jan 2011
(02-06-2014, 12:00 PM)garrlker Wrote: 50-100 bucks. Preferably closer to 50. No preference on pressure, not loud, and gaming/programming.
$58+shipping on Monoprice. Save $15 with promo code BONUS15 for a limited time.
02-28-2014, 11:12 PM
#394
garrlker Offline
That one guy
***
Posts: 183
Threads: 7
Joined: Feb 2012
(02-28-2014, 07:29 PM)lamedude Wrote:
(02-06-2014, 12:00 PM)garrlker Wrote: 50-100 bucks. Preferably closer to 50. No preference on pressure, not loud, and gaming/programming.
$58+shipping on Monoprice. Save $15 with promo code BONUS15 for a limited time.
Thank you!!!
Gaming Rig
Spoiler: (Show Spoiler)
i5-3570K @ 4.1GHz
GTX 660 Ti 3GB Superclocked Edition
20GB G.Skill DDR3 RAM
1.8TB of storage
Windows 7 64-bit
03-01-2014, 05:10 AM
#395
NaturalViolence Offline
It's not that I hate people, I just hate stupid people
*******
Posts: 9,013
Threads: 24
Joined: Oct 2009
Out of stock, and $70 with no promo on Amazon. Damn you, capitalism!
03-01-2014, 10:49 AM
#396
admin89 Offline
Overclocker™ ✓ᵛᵉʳᶦᶠᶦᵉᵈ
*******
Posts: 6,889
Threads: 127
Joined: Nov 2009
It doesn't even exist on Amazon.co.jp.
Perhaps I should buy a Razer BlackWidow in the near future (two months or so). The standard version is pretty cheap ($95), but I'll probably get the Ultimate version.
03-07-2014, 03:55 AM
#397
teh_speleegn_polease Offline
Misc. Member
**
Posts: 44
Threads: 2
Joined: Aug 2013
I'm thinking of upgrading my 680 to a 780 Ti. I am not able to play a lot of games at 1080p with 4xAA, and some games have trouble running even with fast depth calculation on. I'm not sure, however, whether I should wait for high-end Maxwell and/or the 790. I could probably survive a year without upgrading.
03-07-2014, 04:21 AM
#398
MayImilae Online
Chronically Distracted
**********
Administrators
Posts: 4,620
Threads: 120
Joined: Mar 2011
I suggest waiting for the 800 series. But frankly, you'd be better served by the CPU architecture after Broadwell.
AMD Threadripper Pro 5975WX PBO+200 | Asrock WRX80 Creator | NVIDIA GeForce RTX 4090 FE | 64GB DDR4-3600 Octo-Channel | Windows 11 22H2 | (details)
MacBook Pro 14in | M1 Max (32 GPU Cores) | 64GB LPDDR5 6400 | macOS 12
03-07-2014, 08:23 AM
#399
NaturalViolence Offline
It's not that I hate people, I just hate stupid people
*******
Posts: 9,013
Threads: 24
Joined: Oct 2009
Yeah, a 680 should not be producing a bottleneck with those settings. Either your card is throttling or your CPU is the bottleneck.
03-08-2014, 04:33 AM
#400
teh_speleegn_polease Offline
Misc. Member
**
Posts: 44
Threads: 2
Joined: Aug 2013
(03-07-2014, 08:23 AM)NaturalViolence Wrote: Yeah, a 680 should not be producing a bottleneck with those settings. Either your card is throttling or your CPU is the bottleneck.

Well, unless the CPU has some part in rendering and/or applying AA, it's definitely not that (lowering the resolution lets me run almost any game at full speed, except maybe Metroid Prime with EFB to RAM).

And I don't think it's throttling... usage is shown as 100% (using Open Hardware Monitor). In fact, running most PC games, such as Skyrim, at max settings (including maxing everything I could in the Nvidia control panel), I easily get 80 FPS or so...
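The diagnosis logic in this exchange (a pegged GPU at full clocks is GPU-bound, a pegged GPU at reduced clocks is throttling, an idle-ish GPU alongside a maxed CPU core is CPU-bound) can be sketched as a rough heuristic. The thresholds here are arbitrary illustrative values, not taken from any real monitoring tool:

```python
def classify_bottleneck(gpu_util_pct: float, gpu_clock_mhz: float,
                        rated_boost_mhz: float, cpu_core_util_pct: float) -> str:
    """Rough classification from monitoring readouts (e.g. Open Hardware Monitor)."""
    if gpu_util_pct > 95 and gpu_clock_mhz < 0.9 * rated_boost_mhz:
        # GPU is pegged but running well below its rated clock: likely throttling
        return "thermal/power throttling"
    if gpu_util_pct > 95:
        # GPU is pegged at (near) full clocks: the GPU itself is the limit
        return "GPU-bound"
    if cpu_core_util_pct > 90:
        # GPU has headroom but a CPU core is maxed (common for Dolphin's main thread)
        return "CPU-bound"
    return "inconclusive"

# 100% GPU usage at full clocks, as reported above, points at the GPU itself:
print(classify_bottleneck(100, 1058, 1058, 60))  # GPU-bound
```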