Dolphin, the GameCube and Wii emulator - Forums

Mac OS X GPU selection (use offline GPUs)

kode54

I would like to be able to select my offline GPU to render Dolphin, as it seems to be significantly faster at that than my primary gaming GPU. I have an Nvidia GeForce GTX 670 which I purchased last year, which seems to perform a lot worse on OS X than on Windows or Linux, while my new Radeon R9 270X manages to run Twilight Princess (GCN) at full speed in most areas. On the other hand, the GTX 670 manages to run games like BioShock Infinite at the High preset at full speed, while the Radeon slows down to a crawl just looking out into the rain in the first scene of the game.

I would like to be able to use my R9 270X as an offline GPU, that is, rendering with no displays attached and presenting to a framebuffer on the GPU that does have displays attached. That way, I can hopefully get the best performance out of Dolphin in OS X, while using the GTX 670 for most other games and otherwise mostly using the Radeon for OpenCL processing.

Of course, I suppose this sort of functionality is useless for all other ports, and also useless on real Macs, which usually only have one GPU, or identical GPUs, or a combination of integrated and discrete GPUs where the OS will default applications to using the discrete GPU already.
Hi, guy from #higan. Why not check out #dolphin-emu and #dolphin-dev as well?

Dolphin does actually allow you to choose a GPU on Windows and Linux; it's likely disabled on OS X for the reason you mentioned. I don't think any of the developers would be willing to port it over, either, as it'd require them to figure out how to specifically select an adapter when creating an OpenGL context on OS X, and none of them actually use OS X aside from comex.
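For what it's worth, actually selecting a specific renderer when creating a GL context on OS X would go through CGL. Here's a rough sketch, not anything from Dolphin's code; the renderer ID is a placeholder you'd get by enumerating renderers first:

// Sketch only: create a GL context on a specific (possibly display-less) renderer via CGL.
#include <OpenGL/OpenGL.h>
#include <cstdio>

CGLContextObj CreateContextOnRenderer(GLint renderer_id)
{
    CGLPixelFormatAttribute attribs[] = {
        kCGLPFAAccelerated,
        kCGLPFAAllowOfflineRenderers,  // include GPUs with no display attached
        kCGLPFARendererID, (CGLPixelFormatAttribute)renderer_id,
        kCGLPFAOpenGLProfile, (CGLPixelFormatAttribute)kCGLOGLPVersion_3_2_Core,
        (CGLPixelFormatAttribute)0
    };

    CGLPixelFormatObj pix = nullptr;
    GLint num_formats = 0;
    if (CGLChoosePixelFormat(attribs, &pix, &num_formats) != kCGLNoError || !pix)
        return nullptr;

    CGLContextObj ctx = nullptr;
    CGLCreateContext(pix, nullptr, &ctx);
    CGLDestroyPixelFormat(pix);
    return ctx;
}

void ListRenderers()
{
    // Enumerate all renderers so the user could pick one (e.g. the offline Radeon).
    CGLRendererInfoObj info = nullptr;
    GLint count = 0;
    if (CGLQueryRendererInfo(0xffffffff, &info, &count) != kCGLNoError)
        return;
    for (GLint i = 0; i < count; ++i)
    {
        GLint id = 0, online = 0;
        CGLDescribeRenderer(info, i, kCGLRPRendererID, &id);
        CGLDescribeRenderer(info, i, kCGLRPOnline, &online);
        printf("renderer 0x%x, online=%d\n", id, online);
    }
    CGLDestroyRendererInfo(info);
}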
kode54: yeah, such a feature is possible, but imo it shouldn't be done in userland. All we could do is read back the framebuffer once per frame and display it on the other GPU. That sounds fine, but it adds a huge amount of overhead for two reasons. First, the readback of the framebuffer usually stalls the driver and prevents frames from being swapped asynchronously. Second, in userspace we can't share buffers between the two drivers, so there have to be three copies: from the VRAM of your AMD GPU into main memory (done by that GPU), a memcpy from main memory controlled by the AMD driver into main memory controlled by your Nvidia driver (done entirely by the CPU), and finally another GPU-based copy into the VRAM of your Nvidia GPU. A full-HD stream is roughly 500 MB/s (1920 × 1080 × 4 bytes × 60 fps), so that memcpy matters a lot.
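To illustrate, the naive per-frame userland path described above would look roughly like this; the context/texture names are made up for the sketch, nothing here is Dolphin code:

// Sketch of the naive cross-GPU present path, with the copies marked.
#include <OpenGL/OpenGL.h>
#include <OpenGL/gl3.h>
#include <vector>

void PresentFrameAcrossGpus(CGLContextObj render_ctx,   // context on the offline Radeon
                            CGLContextObj display_ctx,  // context on the GTX 670 driving the display
                            GLuint display_texture, int width, int height)
{
    static std::vector<unsigned char> staging;
    staging.resize((size_t)width * height * 4);

    // Copy 1: GPU -> CPU readback on the rendering GPU. This stalls the AMD driver,
    // since the frame has to be finished before the pixels can be read.
    CGLSetCurrentContext(render_ctx);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, staging.data());

    // Copy 2 happens implicitly: the Nvidia driver copies from our buffer into its own
    // driver-owned memory on upload, because the two drivers can't share buffers in userland.
    // Copy 3: CPU -> GPU upload into the displaying GPU's VRAM.
    CGLSetCurrentContext(display_ctx);
    glBindTexture(GL_TEXTURE_2D, display_texture);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_RGBA, GL_UNSIGNED_BYTE, staging.data());
    // ...then draw a fullscreen quad with display_texture and swap on display_ctx.
}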

pauldacheez: We don't support choosing a GPU; we only support giving D3D a hint about which GPU _should_ be used. That's why it often fails on Optimus laptops :-(
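For comparison, explicitly requesting a particular adapter on Windows goes through DXGI; a purely illustrative sketch (again, not Dolphin's actual code):

// Sketch: explicitly selecting a D3D11 adapter via DXGI instead of relying on a hint.
#include <d3d11.h>
#include <dxgi.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

ComPtr<ID3D11Device> CreateDeviceOnAdapter(UINT adapter_index)
{
    ComPtr<IDXGIFactory1> factory;
    if (FAILED(CreateDXGIFactory1(__uuidof(IDXGIFactory1),
                                  reinterpret_cast<void**>(factory.GetAddressOf()))))
        return nullptr;

    ComPtr<IDXGIAdapter1> adapter;
    if (FAILED(factory->EnumAdapters1(adapter_index, adapter.GetAddressOf())))
        return nullptr;

    // When an explicit adapter is passed, the driver type must be UNKNOWN.
    ComPtr<ID3D11Device> device;
    D3D11CreateDevice(adapter.Get(), D3D_DRIVER_TYPE_UNKNOWN, nullptr, 0,
                      nullptr, 0, D3D11_SDK_VERSION,
                      device.GetAddressOf(), nullptr, nullptr);
    return device;
}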

Edit: I forgot to say how it should be done...

The usual way to handle dual-GPU systems is to do all of this in the driver itself; Optimus on Windows and DRI_PRIME on Linux are such frameworks. But I don't know whether OS X supports anything like this; maybe you'll find something by searching for Optimus + OS X?
The OS _is_ able to share buffers, so it doesn't need the CPU-based memcpy. As the displaying GPU often only has shared memory, the second GPU-based memcpy can also be skipped. And a nice implementation of the first GPU-based memcpy is also possible, as almost every GPU has parallel memory streaming units.
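On OS X the closest thing to that kind of OS-level buffer sharing is probably IOSurface, which can be bound as a texture in either GPU's context. A rough sketch, assuming OS X actually lets both GPUs back the same surface (unverified) and reusing the two CGL contexts from the earlier sketch:

// Sketch: sharing a frame between two GPUs via an IOSurface, avoiding the CPU memcpy.
#include <IOSurface/IOSurface.h>
#include <OpenGL/OpenGL.h>
#include <OpenGL/CGLIOSurface.h>
#include <OpenGL/gl3.h>

IOSurfaceRef CreateSharedSurface(int width, int height)
{
    // Property dictionary describing a 32-bit BGRA surface.
    CFMutableDictionaryRef props = CFDictionaryCreateMutable(
        kCFAllocatorDefault, 0, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
    int bpe = 4;
    int pixel_format = 'BGRA';
    CFNumberRef w = CFNumberCreate(nullptr, kCFNumberIntType, &width);
    CFNumberRef h = CFNumberCreate(nullptr, kCFNumberIntType, &height);
    CFNumberRef b = CFNumberCreate(nullptr, kCFNumberIntType, &bpe);
    CFNumberRef f = CFNumberCreate(nullptr, kCFNumberIntType, &pixel_format);
    CFDictionarySetValue(props, kIOSurfaceWidth, w);
    CFDictionarySetValue(props, kIOSurfaceHeight, h);
    CFDictionarySetValue(props, kIOSurfaceBytesPerElement, b);
    CFDictionarySetValue(props, kIOSurfacePixelFormat, f);
    IOSurfaceRef surface = IOSurfaceCreate(props);
    CFRelease(w); CFRelease(h); CFRelease(b); CFRelease(f); CFRelease(props);
    return surface;
}

// Bind the same surface as a texture in a given context; done once per context.
GLuint BindSurfaceAsTexture(CGLContextObj ctx, IOSurfaceRef surface, int width, int height)
{
    CGLSetCurrentContext(ctx);
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_RECTANGLE, tex);
    CGLTexImageIOSurface2D(ctx, GL_TEXTURE_RECTANGLE, GL_RGBA8, width, height,
                           GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, surface, 0);
    return tex;
}

// The rendering GPU would draw into an FBO backed by its texture; the displaying GPU
// samples its texture to draw the final quad, so any copies stay on the GPUs' DMA engines.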