(11-12-2012, 09:32 AM)rpglord Wrote:
(11-12-2012, 09:00 AM)NaturalViolence Wrote: Incorrect. I have seen VPS > FPS happen with the default settings.
Usually it will only happen if the game is coded to run at a lower FPS.
For example, we all know that not all games run at 60 FPS; a lot of games run at 30 FPS.
In that case it will be 30 FPS / 60 VPS, if the computer is able to run the game at full speed.
But that's a fixed 1:2 ratio; the game is programmed to run that way.
What I am talking about here is "desynchronizing" the two threads in a sense (skid explained that true desynchronizing is impossible, since we would get a black screen):
basically letting the game code take care of sync issues instead of forcibly slowing down the JIT/JITIL recompiler.
Let's take my example where VPS > FPS by game design.
Now, instead of only allowing 30/60 or 29/58 or 28/56 etc., allow it to be, for example, 29/60, 20/60, 15/60, etc. You get the point.
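To make the proposal concrete, here is a toy sketch (my own illustration, not Dolphin's actual logic) contrasting the current lockstep ratios with the flexible ones suggested above:

```python
# Toy illustration: a game hard-coded to a 1:2 FPS:VPS ratio can only
# slow down in lockstep, while the proposed scheme would let FPS drop
# independently while VPS stays at 60.

def lockstep_ratios(max_vps: int = 60, steps: int = 3):
    """Fixed 1:2 ratio: FPS and VPS always slow down together."""
    return [(vps // 2, vps) for vps in range(max_vps, max_vps - 2 * steps, -2)]

def flexible_ratios(vps: int = 60, fps_values=(29, 20, 15)):
    """Proposed: VPS stays constant while FPS is allowed to vary."""
    return [(fps, vps) for fps in fps_values]

print(lockstep_ratios())   # [(30, 60), (29, 58), (28, 56)]
print(flexible_ratios())   # [(29, 60), (20, 60), (15, 60)]
```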
I am not sure what logic they use to sync the GPU and CPU, but out of 60 CPU requests only 30 are processed by the GPU in most games.
What good would it do for the CPU to send 60 frame render requests per second if the GPU can only process 15 of them?
The next CPU cycle will bring another 60 new requests, so what happens to the other 15 frames (if the game runs at 30 FPS) or 45 frames (if the game runs at 60 FPS) of each cycle? They are dropped.
Do you like to play games with such huge frame drops?
That's why they sync VPS with FPS: in order to process all frames, even if the game gets slower.
Otherwise, how could the GPU process 60 FPS if the CPU is only sending 20 VPS?
Like neobrain said, where would the data come from?
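The dropped-frames arithmetic above can be sketched as a toy model (my own illustration with made-up numbers, not Dolphin code):

```python
# Contrast a desynced CPU/GPU pair, where excess frame requests are
# dropped, with a synced pair, where both threads slow down together.

def desynced(cpu_requests_per_sec: int, gpu_capacity_per_sec: int):
    """CPU keeps emitting at full speed; the GPU renders what it can
    and the remaining requests of each cycle are dropped."""
    rendered = min(cpu_requests_per_sec, gpu_capacity_per_sec)
    dropped = cpu_requests_per_sec - rendered
    return rendered, dropped

def synced(cpu_requests_per_sec: int, gpu_capacity_per_sec: int):
    """CPU is throttled to the GPU's pace: every frame is rendered,
    but the whole game runs below full speed."""
    rendered = min(cpu_requests_per_sec, gpu_capacity_per_sec)
    slowdown = rendered / cpu_requests_per_sec  # fraction of full speed
    return rendered, slowdown

# A 60 VPS game on a GPU that can only manage 15 frames per second:
print(desynced(60, 15))  # (15, 45) -> 45 frames of each cycle dropped
print(synced(60, 15))    # (15, 0.25) -> all frames kept, game at 25% speed
```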
AMD FX-6300 Six-Core Processor OC 4.5 GHZ
Gtx580LE 3GB RAM
System RAM 18GB
Dolphin 4.0-1843
OS Archlinux x86_64
