Dolphin, the GameCube and Wii emulator - Forums

Full Version: Dolphin CPU hierarchy [UNOFFICIAL]
hell321 Wrote:amd always ahead of intel cpu model support and cpu instruction

Actually, historically that's the first time it has happened since the Athlon 64. SSSE3, SSE4.1, SSE4.2, and AES-NI were all supported on Intel CPUs for a long time before AMD finally got around to adding them with Bulldozer.

Regardless, it has little impact on the ability to sell the products, since very little software out there gains any significant benefit from the more recent extensions. Most of the useful instructions were added a long time ago, and the newer extensions require some pretty complicated intrinsic functions that are difficult to implement and only useful under some very rare conditions.
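To give a rough idea of why that is, here's a minimal sketch (the function names are made up, this isn't from any particular program) of the usual pattern: software detects newer extensions at runtime and falls back to generic code when they're missing, so nobody has to buy a new CPU just to run it.

```c
/* Minimal sketch (hypothetical function names, my illustration): newer ISA
 * extensions are treated as optional fast paths and detected at runtime.
 * Assumes GCC or Clang on x86; an AES-NI check would work the same way
 * through CPUID. */
#include <stdio.h>

static void checksum_sse42(void) { puts("using the SSE4.2 fast path"); }
static void checksum_plain(void) { puts("using the plain C fallback"); }

int main(void)
{
    __builtin_cpu_init();                   /* populate the CPU feature flags */
    if (__builtin_cpu_supports("sse4.2"))
        checksum_sse42();                   /* rare hot path gets intrinsics  */
    else
        checksum_plain();                   /* everything else stays generic  */
    return 0;
}
```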

Also, Smart Cache isn't an instruction set extension. It's simply a marketing term Intel uses for a shared L2/L3 cache.

hell321 Wrote:i believe higher cpu ghz speed the better
i don't believe in benchmark cause it run straight line and biased to intel

I don't believe for a second that someone would be passionate enough about CPUs to keep up with the latest ISA extensions (even though I can tell you copied/pasted them from an article) yet not understand the basis for the megahertz myth. This is clearly trolling, so please knock it off.
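For anyone following along, the short version of the megahertz myth: throughput is roughly clock frequency times average instructions per clock (IPC), so a slower-clocked chip with better IPC can come out ahead. A tiny illustration with made-up numbers (not real CPUs):

```c
/* Toy numbers, purely illustrative: performance ~ clock frequency * IPC. */
#include <stdio.h>

int main(void)
{
    double cpu_a = 4.0e9 * 1.2;   /* 4.0 GHz, low IPC  -> 4.8 G instructions/s */
    double cpu_b = 3.0e9 * 2.0;   /* 3.0 GHz, high IPC -> 6.0 G instructions/s */
    printf("A: %.1f Ginstr/s, B: %.1f Ginstr/s\n", cpu_a / 1e9, cpu_b / 1e9);
    return 0;                     /* the lower-clocked chip wins this one */
}
```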

Nameless Mofo Wrote:Ouch! To be fair, Barcelona was still a good architecture (architecturally an evolution of K8), and competed well with Intel in the big MP (>4 sockets) arena, due mostly to the fact that Hypertransport was better for inter-CPU communication than FSB was, and cache coherency was more efficiently maintained.

Perhaps. But its consumer-level derivatives left much to be desired. The clock rates were much lower than Brisbane's and the average IPC only went up by around 9%. The result was that it was slower than Brisbane in about as many benchmarks as it was faster in. Similar to Thuban vs. Zambezi (except in reverse: higher clock rate but lower IPC).
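To put that ~9% in perspective, here's the rough break-even arithmetic (using only the figure above, clocks left symbolic):

```latex
% Break-even point for a ~9% IPC gain against a clock-speed deficit:
\frac{\mathrm{perf}_{\mathrm{new}}}{\mathrm{perf}_{\mathrm{old}}}
  \approx \frac{\mathrm{IPC}_{\mathrm{new}}}{\mathrm{IPC}_{\mathrm{old}}}
          \cdot \frac{f_{\mathrm{new}}}{f_{\mathrm{old}}}
  = 1.09 \cdot \frac{f_{\mathrm{new}}}{f_{\mathrm{old}}},
\qquad
\text{break-even at } \frac{f_{\mathrm{new}}}{f_{\mathrm{old}}} \approx \tfrac{1}{1.09} \approx 0.92
```

So clock the new core much more than ~8% lower and the IPC gain is gone, which matches the mixed benchmark results.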

Nameless Mofo Wrote:But like I said when Intel came out with Conroe/Merom/Woodcrest it left AMD in the dust. I remember Barcelona was the first chip design AMD wrote in Verilog, which was a huge deal. K8 was written in an in-house HDL that AMD had an in-house simulator for.

God I have so many questions to ask you that you may or may not be able to answer due to legal complications.

I've been hearing unsubstantiated rumors for years now that part of the reason AMD's more recent microarchitectures have fallen behind is a greater reliance on software libraries vs. hand-drawn optimizations compared to earlier microarchitectures (not sure how to phrase this, but you know what I mean). Is there any truth to this?

Nameless Mofo Wrote:I do processor validation. We find the bugs before tapeout and before the customers/users do.

Now that is a good job to have! What type of formal education and career experience do you have if you don't mind me asking? How did you find your way into this career?
(06-13-2013, 01:13 PM)NaturalViolence Wrote: God I have so many questions to ask you that you may or may not be able to answer due to legal complications.

I've been hearing unsubstantiated rumors for years now that part of the reason AMD's more recent microarchitectures have fallen behind is a greater reliance on software libraries vs. hand-drawn optimizations compared to earlier microarchitectures (not sure how to phrase this, but you know what I mean). Is there any truth to this?
Yeah, it's a little tricky to talk about such things without getting in hot water. But, if you're referring to logic synthesis vs. hand drawn schematics, I remember at AMD we had a combination approach, hand drawing for timing critical areas and synthesis for other parts. I'm really not as familiar with the back end stuff (synthesis, place & route, timing closure) as the front end. What AMD does now, I don't know. But given their history of layoffs, downsizing etc., it wouldn't surprise me if they did have to rely more on synthesis, as hand drawing schematics and hand place/route is a labor intensive process that requires actual humans, with specialized skills. The other thing that kills AMD (and everyone else) is that they don't have the process advantage and know-how that Intel does. That's really huge.

As for Intel, I can't say much, but you can probably draw your own conclusions.

Quote: Now that is a good job to have! What type of formal education and career experience do you have if you don't mind me asking? How did you find your way into this career?
I have a BS in electrical engineering, and worked on processors, microcontrollers, DSPs, and even some mixed-signal (analog/digital) stuff. All different kinds of validation and test roles, pre- and post-silicon. I got interested in computers in high school, took some programming courses, and decided it was something I could do and enjoy doing. Back then computing was not as mainstream as today, but it was very much on the rise and a good field to go into even then. I thought about being a doctor but decided I didn't like the sight of blood. Big Grin
Can I blame you for the broken RDRAND on Ivy Bridge (erratum BV54), and for the gather instructions on Haswell (errata HSD34, 42, 49), whose workaround appears to be to make them really slow? There's also that USB3 thing everyone is worked up about, a few geeks panicked when they heard VT-x didn't work on SNB-E C1, and at least one guy complained about the broken performance monitors on Haswell (errata HSD11, 29, 30).
That's a mouthful, and I'm joking, so feel free to take it as a joke and not answer. Smile
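For anyone who hasn't used RDRAND: the documented pattern is a bounded retry loop around the instruction's success flag, which is why a unit that misbehaves is such a headache for software. A rough sketch (my illustration, not tied to any specific erratum), assuming GCC/Clang built with -mrdrnd:

```c
/* Rough sketch of the documented RDRAND usage pattern. Build with -mrdrnd on
 * GCC/Clang; real code should also confirm via CPUID that RDRAND exists
 * before calling this. */
#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    unsigned int value = 0;
    int ok = 0;

    for (int i = 0; i < 10 && !ok; i++)   /* bounded retry, per Intel's guidance */
        ok = _rdrand32_step(&value);      /* returns 1 on success, 0 on failure  */

    if (ok)
        printf("rdrand: %u\n", value);
    else
        puts("rdrand kept failing");      /* roughly what a hardware bug looks like */
    return 0;
}
```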
Obviously if I'd worked on any of those projects those bugs would not have seen the light of day. Tongue
Aren't at least a couple of those issues the chipset's fault? This guy tests CPUs (I think).
He specified processor validation, not chipset, so yeah. Some of the issues lamedude listed are CPU-based though.

Nameless Mofo Wrote:I have a BS in electrical engineering, and worked on processors, microcontrollers, DSPs, and even some mixed-signal (analog/digital) stuff. All different kinds of validation and test roles, pre- and post-silicon. I got interested in computers in high school, took some programming courses, and decided it was something I could do and enjoy doing. Back then computing was not as mainstream as today, but it was very much on the rise and a good field to go into even then. I thought about being a doctor but decided I didn't like the sight of blood.

Nice. As inspiring as the results are, the work itself is horribly tedious and debugging can get extremely repetitive and frustrating, at least for me. That, combined with the extreme level of discipline you need to complete the upper-level coursework, drove me away from this type of engineering despite my interest in it. So I ended up going into IT. I'll likely end up getting paid less and my job won't command as much respect, but at least I'm not pulling my hair out every night anymore. I do still regret it a little bit though. Maybe I'll take another crack at it in a few more years.

Nameless Mofo Wrote:But, if you're referring to logic synthesis vs. hand drawn schematics, I remember at AMD we had a combination approach, hand drawing for timing critical areas and synthesis for other parts. I'm really not as familiar with the back end stuff (synthesis, place & route, timing closure) as the front end. What AMD does now, I don't know. But given their history of layoffs, downsizing etc., it wouldn't surprise me if they did have to rely more on synthesis, as hand drawing schematics and hand place/route is a labor intensive process that requires actual humans, with specialized skills.

I figured as much.


Nameless Mofo Wrote:Yeah, it's a little tricky to talk about such things without getting in hot water.

As an interested consumer I find it frustrating. Whenever something goes wrong you always want to know why, or at least who to blame. But companies will never admit to a fault or throw anyone under the bus unless they absolutely have to, because it makes them look bad. So ultimately we can't get the information we need to figure out what happened. It's even worse when they outright lie to try and spin things in a more positive light.

Nameless Mofo Wrote:The other thing that kills AMD (and everyone else) is that they don't have the process advantage and know-how that Intel does. That's really huge.

I've always wondered why this is. Is it just a matter of having more money to bring in the manpower and expertise?

AMD was actually up to date with, and even a little ahead of, Intel in manufacturing technology for a while. So wtf happened? The common theory is that the spinoff of GlobalFoundries caused enough disturbance for them to slip behind, and they've never been able to catch up since. But since there is no alternate reality where the spinoff never happened for us to compare against, I guess we'll never know.

Nameless Mofo Wrote:As for Intel, I can't say much, but you can probably draw your own conclusions.

I have plenty of theories but no way to confirm any of them without the same level of access as the company's CEO.
This work does require a pretty high level of attention to detail, that's for sure. It requires an understanding of the inner workings of the core and, more and more, good knowledge of the SoC that the core goes into. But when you find a bug, it's very satisfying knowing that the customer won't see it.

About the frustration comment, I think companies see "throwing anyone under the bus" as counter-productive and as not solving customer-visible problems when they arise. The energy is better spent solving the problem and fixing it. About the lying, I agree PR spin has no place. The Pentium divide bug comes to mind; thankfully Intel learned its lesson well from that. So did AMD: in later years when the TLB bug hit, they were more forthcoming with information and workarounds while the silicon fix was being done.

The process thing comes down to Intel having the best process engineers and the ability to invest heavily in R&D. And it's been its own foundry since the early days of computing, so much of that expertise is home-grown and has stayed that way. AMD was never ahead of Intel in process, although they got as close as ~6 months behind back at 65nm. I'm not sure how the GloFo spinoff affected things, since the fab arm and design arm of AMD were somewhat separate entities. I think the bigger thing that hurt AMD was that Bulldozer's real-life performance did not match the projections that were made. They put all their eggs for the future in that basket, and that misstep, along with Intel's better execution on the Conroe family and then Nehalem after that, was what really did them in.
Nameless Mofo Wrote:AMD was never ahead of Intel in process, although they got as close as ~6 months behind back at 65nm.

I'm going to have to debate you on that. AMD was only one month behind Intel at 180nm, with the Athlon Orion/Pluto arriving in November of 1999, one month after the Pentium III Coppermine (and with much better supply/yields). And they stayed within a few months of Intel all the way until 65nm, which is when they began to slip behind. So they definitely got closer than 6 months behind.

I'll admit they were never ahead. I remembered the Athlon being one month ahead of Coppermine, but apparently it was the other way around.

Still, my point stands. They were competitive in manufacturing technology until 65nm, which is around the time the GlobalFoundries spinoff happened. The one thing that makes me think the problems might not have been related is that they were already beginning to slip far behind Intel about a year before the spinoff. I'm still curious how they went from 3-6 months behind to almost 2 years behind Intel so quickly. They must have had trouble migrating to some new technique. Maybe they had trouble figuring out how to incorporate immersion lithography, which came in around that time.
It's funny, 180nm is like a bygone era now, that stuff should be in a museum. Tongue As I remember it, at 65nm all foundries (Intel, AMD, TSMC, etc) started running into big leakage issues, Intel included. The transistors leaked a lot of current even when they were off. That's one reason why Prescott ran so hot (old news now, nothing secret about that). That and the fact it had 30+ pipeline stages didn't help matters any. Lots of logic + high speed clocks = high current = space heater.
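To make "lots of logic + high speed clocks = high current" concrete, here's a back-of-envelope sketch; every number in it is made up purely for illustration:

```c
/* Back-of-envelope only, all numbers hypothetical: dynamic power scales as
 * P ~ a*C*V^2*f (switching activity a, switched capacitance C, supply
 * voltage V, clock f), and leakage adds a V*I_leak term that burns power
 * even when the logic is idle. */
#include <stdio.h>

int main(void)
{
    double a_C   = 20e-9;    /* hypothetical activity * switched capacitance (F) */
    double V     = 1.3;      /* hypothetical core voltage (V)                    */
    double f     = 3.8e9;    /* hypothetical clock (Hz)                          */
    double Ileak = 20.0;     /* hypothetical total leakage current (A)           */

    double p_dynamic = a_C * V * V * f;   /* "lots of logic + high speed clocks" */
    double p_leak    = V * Ileak;         /* paid even with the clocks gated     */

    printf("dynamic: %.0f W, leakage: %.0f W\n", p_dynamic, p_leak);
    return 0;
}
```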

I don't clearly remember how things got sorted out at that point, but I do remember that Intel went with a gate-last process at either 65nm or 45nm, while AMD/GloFo stayed gate-first. I think this was one of the big reasons, as gate-last helps yield and reliability. Intel also switched to a high-k gate dielectric around that time. Bottom line, they solved the technical barriers that came up at 65nm before anyone else and were able to move ahead while other foundries struggled.

Intel also came up with the tri-gate "3D" transistor, which is a pretty big change in how transistors are formed. I'm sure it's patented, and it's in use starting from Ivy Bridge. Again that much is public, you can read about it on Wikipedia. The 3D transistor has more drain-gate and source-gate surface area, which means less gate current required to "open" the drain-source channel enough for the transistor to be in the "on" state. It also has faster switching times = higher speed. This is one of the recent advances that's helped widen the gap between Intel and GloFo/TSMC/Chartered/etc.

EDIT: wow, I haven't thought that much about process stuff in a long time. Big Grin
Nameless Mofo Wrote:It's funny, 180nm is like a bygone era now, that stuff should be in a museum.

Perhaps, but it still counts. There was a time when AMD was competitive with Intel in manufacturing technology.


Nameless Mofo Wrote:As I remember it, at 65nm all foundries (Intel, AMD, TSMC, etc) started running into big leakage issues, Intel included. The transistors leaked a lot of current even when they were off. That's one reason why Prescott ran so hot (old news now, nothing secret about that).

That happened at 90nm. It got worse at 65nm but 90nm was when Intel, AMD, TSMC, and IBM were all struggling with sudden dramatic increases in leakage compared to previous processes. And Prescott was 90nm, not 65nm.

Nameless Mofo Wrote:I don't clearly remember how things got sorted out at that point, but I do remember that Intel went with a gate-last process at either 65nm or 45nm, while AMD/GloFo stayed gate-first. I think this was one of the big reasons, as gate-last helps yield and reliability. Also Intel switched to a high-k gate back around that time too. Bottom line, they solved the technical barriers that came up at 65nm before anyone else, and was able to move ahead while other foundries struggled.

Now this is a good answer. The tri-gate transistor design is too recent to apply to my question since AMD was already far behind at that point.