As asked in my thread:
Why is r3.0-784 so fast? All previous revisions emulated most games (for example, Mario Galaxy 1/2) awfully slowly on my computer ..
And I wanted to let you know that Donkey Kong Country Returns isn't working anymore .. I wanted to test it because it was running at only 45 FPS ..

In a technical sense, it relieves some load on the CPU by offloading the vertex buffer to the GPU's dedicated memory (instead of using an array on the CPU).
In a non-technical sense, it messes with some graphics stuff.
What is wrong with DKCR? Can you explain further?
The person who posted this revision said something about working on "OLG" next.
What does this mean, and will Dolphin get even faster because of it?
Well, Donkey Kong Country Returns doesn't even boot .. When you start the game, the screen stays dark ..
DKCR works fine on 3.0-796.
(10-30-2012, 09:23 AM)DefenderX Wrote: DKCR works fine on 3.0-796.
Not on my computer ("fine") ..
(10-30-2012, 09:18 AM)slax65 Wrote: The person who posted this revision said something about working on "OLG" next.
What does this mean, and will Dolphin get even faster because of it?
Well, Donkey Kong Country Returns doesn't even boot .. When you start the game, the screen stays dark ..
By "OLG" I assume you mean "OGL," also known as OpenGL. It is the third backend for Dolphin, and the only one available on Linux and OS X. He is now working on speeding up that backend the way he did for Direct3D 9.
DKCR should work in 796. What are your settings? What revisions, if any, did it work on before?
It works, but at no more than 45 FPS .. The WHOLE game runs at this speed, even the Wii screen at the beginning ..
Seriously, you should stop complaining and overclock that CPU already.
An i7 930 @ 3.4 GHz should run DKCR at full speed with HLE.
Quote:In a technical sense, it relieves some load on the CPU by offloading the vertex buffer to the GPU's dedicated memory (instead of using an array on the CPU).
Wait what?
It reduces stall time by rotating between vertex buffers, allowing new vertex data to be streamed to the GPU while the previous buffer's contents are still being processed by the GPU.
WHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAT?!?!?!
My impression from what the devs said was that they implemented the buffer using arrays on the CPU, and that transferring large blocks of data from the CPU to the GPU stalled the video thread. What I understood rodolfo's code to be doing is to use the GPU's vertex buffers instead of some stupid array on the CPU's end.
Rage quit.