Logical core mapping might depend on the OS or the specific CPU generation. On my i5-2557M under OS X, logical cores 1 and 3 almost invariably show lower reported usage than 0 and 2. No sane task scheduler would pile significantly more work onto one physical core than another, if only because of heat concerns, but whether a core's second register set (its Hyper-Threading sibling) gets loaded matters much less, which leads me to believe 1 and 3 are the duplicates. I don't see how either mapping would be more natural than the other, so it wouldn't surprise me if it varies or is simply arbitrary.
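For what it's worth, here's a minimal sketch of how to at least confirm that Hyper-Threading siblings exist on OS X via sysctl (hw.physicalcpu and hw.logicalcpu are real keys, but they only give counts; which logical IDs pair up stays opaque, which is exactly the ambiguity here):

```cpp
// Minimal sketch: count physical vs. logical cores on OS X via sysctl.
// hw.physicalcpu and hw.logicalcpu are real sysctl keys, but they only give
// counts -- which logical IDs (0/2 vs. 1/3) share a physical core is not exposed.
#include <sys/sysctl.h>
#include <cstdio>

static int QuerySysctlInt(const char* name) {
  int value = 0;
  size_t len = sizeof(value);
  if (sysctlbyname(name, &value, &len, nullptr, 0) != 0)
    return -1;
  return value;
}

int main() {
  const int physical = QuerySysctlInt("hw.physicalcpu");
  const int logical = QuerySysctlInt("hw.logicalcpu");
  std::printf("physical: %d, logical: %d\n", physical, logical);
  if (logical == 2 * physical)
    std::printf("Hyper-Threading active: every core has a duplicate register set.\n");
  return 0;
}
```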
Dolphin accesses the emulated memory from both threads all the time, so be careful about moving these two threads to different CPUs.
In theory, putting the two threads on different CPUs would let both run at higher frequencies (that's why the kernel scheduler moves threads between all cores), but we _do_ have cache issues: each CPU then frequently has to wait for the other to flush its cache.
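To make the trade-off concrete, here's a minimal sketch of pinning the two emulation threads onto assumed Hyper-Threading siblings so they share cache. It uses Linux's pthread_setaffinity_np since OS X exposes no hard pinning, and cores 0 and 1 being siblings is purely an assumption:

```cpp
// Minimal sketch (Linux; OS X has no hard-pinning API): put the CPU and GPU
// emulation threads on two logical cores. If cores 0 and 1 are HT siblings
// (an assumption -- check your topology), they share L1/L2 cache, so constant
// accesses to the same emulated memory avoid cross-core cache-line ping-pong,
// at the cost of both threads competing for one physical core.
#ifndef _GNU_SOURCE
#define _GNU_SOURCE  // for pthread_setaffinity_np
#endif
#include <pthread.h>
#include <sched.h>
#include <thread>

static void PinCurrentThreadToCore(int core_id) {
  cpu_set_t set;
  CPU_ZERO(&set);
  CPU_SET(core_id, &set);
  pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

int main() {
  std::thread cpu_thread([] {
    PinCurrentThreadToCore(0);  // emulated CPU: JIT loop touching emulated RAM
    // ...
  });
  std::thread gpu_thread([] {
    PinCurrentThreadToCore(1);  // emulated GPU: FIFO reads from the same RAM
    // ...
  });
  cpu_thread.join();
  gpu_thread.join();
  return 0;
}
```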
Could it be that the CPUs, or even the threads, divide the GPU power between them?
So... Dolphin only uses 2 threads and probably can't access enough GPU power, because it's reserved for the other threads?
No. That doesn't make any sense.
I have found the problem: the new Direct3D causes the lag. I tested this unofficial DX9 Dolphin release, selected Direct3D11 as the gfx backend, and everything runs fine now.

CosmoCortney: D3D9 was removed and D3D11 was renamed to D3D, so I doubt this is the reason.
Oops, sorry. I meant to say D3D9 but typed D3D11 instead.
Will D3D9 be back in the future?
There's absolutely zero chance DX9 will ever come back to official builds of Dolphin. Fewer and fewer people will legitimately need it (e.g. because their GPU is too old for DX10/GL3) or care about it (because the current D3D/GL backends are constantly being improved and are already faster in some cases), and time's not flowing backwards.
Seriously, will that fucking horrible backend just die already? Sure, it's fast, but it's buggy as fuck and it's completely incompatible with any of the fixes in tev_fixes_new, which fixes a fuckton of minor and major graphical bugs in *every game* just by using integers for all GPU calculations.
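To illustrate that integer point: here's a toy example, not Dolphin's actual code, and the shift-by-8 lerp is only a stand-in for real TEV fixed-point behaviour. A hardware-style integer blend can land one step away from the same blend done in floats and rounded at the end, and that off-by-one is the class of bug an all-integer pipeline removes:

```cpp
// Toy example: fixed-point vs. float blending. The >>8 lerp below is a
// simplified stand-in for the kind of integer shortcut real GPU hardware
// takes; it is NOT the exact TEV formula.
#include <cmath>
#include <cstdint>
#include <cstdio>

int main() {
  const uint8_t a = 200, b = 100, t = 128;

  // Hardware-style fixed-point lerp: approximate /255 with a shift by 8.
  const int int_result = (a * (255 - t) + b * t + 128) >> 8;  // -> 149

  // Float emulation of the same lerp, rounded only at the very end.
  const float fa = a / 255.0f, fb = b / 255.0f, ft = t / 255.0f;
  const int float_result =
      static_cast<int>(std::round((fa * (1.0f - ft) + fb * ft) * 255.0f));  // -> 150

  // The one-step mismatch is the kind of bug an all-integer pipeline avoids.
  std::printf("integer: %d, float: %d\n", int_result, float_result);
  return 0;
}
```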
OK, but can we get back to the subject of why on earth this issue exists in D3D11 and OpenGL but not in D3D9?