We use SSE for the PPC float handling. The GameCube's PPC has SIMD with a 2-float width, but otherwise we don't vectorize anything ourselves. Our JIT doesn't have any optimization passes; it's a one-pass compiler from PPC to x64.
We use some special instructions up to AVX, but they don't speed things up much. The biggest performance hit isn't the lack of vectorization but the strange PPC float behavior. If you want to read a bit more about this weird behavior, start on page 6.
There are plans to implement an LLVM-based JIT, but it's a very long-term task, and right now nobody knows if it would be faster at all...
For the measurement, I just hope to see one function with a much higher load on the Ubuntu builds. I don't want to believe this is a general slowdown across all called C functions...
