Dolphin, the GameCube and Wii emulator - Forums › Dolphin Emulator Discussion and Support › General Discussion

Mac OS X GPU selection (use offline GPUs)
01-09-2014, 11:40 AM
#1
kode54
Unregistered
 
I would like to be able to select my offline GPU to render Dolphin, as it seems to be significantly faster at that than my primary gaming GPU. I have an Nvidia GeForce GTX 670, purchased last year, which seems to perform a lot worse on OS X than on Windows or Linux, while my new Radeon R9 270X manages to run Twilight Princess (GCN) at full speed in most areas. On the other hand, the GTX 670 runs games like BioShock Infinite at the High preset at full speed, while the Radeon slows to a crawl just looking out into the rain in the first scene of the game.

I would like to be able to use my R9 270X as an offline GPU, that is, to render with no displays attached and present to a framebuffer on the GPU which does have displays attached. That way, I can hopefully get the best performance out of Dolphin on OS X, while using the GTX 670 for most other games and otherwise mostly using the Radeon for OpenCL processing.

Of course, I suppose this sort of functionality is useless for all other ports, and also useless on real Macs, which usually only have one GPU, or identical GPUs, or a combination of integrated and discrete GPUs where the OS will default applications to using the discrete GPU already.
01-09-2014, 03:35 PM
#2
pauldacheez
Hi, guy from #higan. Why not check out #dolphin-emu and #dolphin-dev as well?

Dolphin does actually allow you to choose a GPU on Windows and Linux; it's likely disabled on OS X for the reason you mentioned. I don't think any of the developers would be willing to port it over, either, as it'd require them to figure out how to specifically select an adapter when creating an OpenGL context on OS X, and none of them actually use OS X aside from comex.
01-09-2014, 07:48 PM (This post was last modified: 01-09-2014, 07:52 PM by degasus.)
#3
degasus
Developer
kode54: Yeah, such a feature is possible, but IMO it shouldn't be done in userland. All we could do is read back the framebuffer once per frame and display it on the other GPU. Sounds fine, but the overhead is huge for two reasons. First, reading back this framebuffer usually stalls the driver and prevents swapping frames asynchronously. Second, in userspace we can't share buffers between the two drivers, so there must be three copies: from the VRAM of your AMD GPU into main memory (by the GPU), a memcpy from main memory controlled by the AMD driver into main memory controlled by your Nvidia driver (done entirely by the CPU), and finally another GPU-based copy into the VRAM of your Nvidia GPU. As a full-HD stream runs at about 500 MB/s, this memcpy matters a lot.
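[Editor's note: a quick sanity check of the ~500 MB/s figure above. The 32-bit RGBA pixel format and 60 FPS rate are assumptions on my part; the post doesn't spell them out.]

```python
# Back-of-envelope bandwidth for one copy of a full-HD frame stream,
# assuming 1920x1080, 4 bytes per pixel (RGBA8), 60 frames per second.
WIDTH, HEIGHT = 1920, 1080
BYTES_PER_PIXEL = 4
FPS = 60

bytes_per_frame = WIDTH * HEIGHT * BYTES_PER_PIXEL   # 8,294,400 bytes
bytes_per_second = bytes_per_frame * FPS             # 497,664,000 bytes

print(f"{bytes_per_second / 1e6:.0f} MB/s per copy")   # -> 498 MB/s

# With the three copies described above (VRAM -> RAM, RAM -> RAM,
# RAM -> VRAM), total bus traffic roughly triples:
print(f"{3 * bytes_per_second / 1e6:.0f} MB/s total")  # -> 1493 MB/s
```

This is why the CPU-side memcpy dominates: the middle copy alone consumes a substantial slice of main-memory bandwidth every frame.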

pauldacheez: We don't support choosing a GPU; we only support giving a hint on D3D about which GPU _should_ be used. That's why it often fails on Optimus laptops :-(

Edit: I forgot to say how it should be done...

The usual way to handle dual-GPU systems is to handle all of this in the driver itself; Optimus on Windows and DRI PRIME on Linux are such frameworks. But I don't know if OS X supports this; maybe you'll find something by searching for Optimus + OS X?
The OS _is_ able to share buffers, so it doesn't need the CPU-based memcpy. As the displaying GPU often only has shared memory, the second GPU-based copy can also be skipped. And a nice implementation of the first GPU-based copy is also possible, as almost every GPU has parallel memory-streaming units.