Dolphin, the GameCube and Wii emulator - Forums

Full Version: Dolphin CPU hierarchy [UNOFFICIAL]
The way I understand the history, Intel saw Itanium as the way forward and it had its advantages, not the least of which was the jettisoning of all the old x86 architectural baggage. However, backwards compatibility with all the old x86 software was high on everyone's priority list, and Itanium's performance at running legacy x86 code sucked. Badly. This was a major strike against it.

About that time AMD came out with x64 (they called it AMD64), which is an evolution of x86: code compatible with x86, it broke the 4GB addressable-memory barrier, simplified some of the x86 segmentation architecture, and added more general-purpose registers (register scarcity was always an x86 handicap), just to name a few things. Intel resisted it at first, but then AMD started making inroads because x64 chips ran legacy x86 code at native speed, and Intel was forced to follow suit. That was a major loss of face for Intel, which sees itself as the owner of x86 and AMD as an interloper.

Whenever Intel has listened to the market, they've usually delivered great things. When they tried to tell the market what it wanted, like with Itanium, not so much. (Think Rambus RDRAM too.)
Nameless Mofo Wrote:However, backwards compatibility with all the old x86 software was high on everyone's priority list, and Itanium's performance at running legacy x86 code sucked. Badly. This was a major strike against it.

While this certainly did hurt it, I find that this disadvantage is greatly overstated. x86 compatibility wasn't a big deal in the server market, where they failed as well. And it wouldn't have been a big issue for the consumer market either if they had managed to get software developers to write applications for IA-64. Ultimately it was the poor performance and high cost that killed its chances. Their trickle-down theory would have worked if it had delivered much better performance and lower cost than the competition, as they had originally promised.

Nameless Mofo Wrote:about that time AMD came out with x64

It wasn't "about that time". It was a full two years later. Itanium could have offered some resistance against x86-64 if it had gained some traction during those two years. But it didn't.

Nameless Mofo Wrote:Whenever Intel has listened to the market, they've usually delivered great things. When they tried to tell the market what it wanted, like with Itanium, not so much. (Think Rambus RDRAM too.)

I agree with this to an extent. However, it is a bit of an oversimplification of the factors that killed several of their products. I suppose anything a company does wrong can ultimately be boiled down to "the market didn't want it," since that's what determines the success or failure of any product. In my opinion, most of the time when they fail it is due to trusting research that didn't pan out to be fully accurate. That is basically what happened with NetBurst, Itanium, RDRAM, the iAPX 432, and the i960. All of them were designs based on faulty research that caused the products to not meet expectations, and all of them would have succeeded if they had met the initial performance estimates. Oftentimes an idea looks flawless on paper and in the initial testing stages, until you actually try to implement it in a useful product.

I do agree with most of your post though and your general sentiment. I know you've been here for a while but I haven't really seen you around, so welcome.

lamedude Wrote:
Wikipedia Wrote:In 1989, HP determined that reduced instruction set computer (RISC) architectures were approaching a processing limit at one instruction per cycle.

I guess HP never heard of the superscalar i960.

The Wikipedia article is paraphrasing, of course. They were referring to traditional (scalar) RISC architectures. As we all know, superscalar designs battled VLIW designs for a while in the early 90s until superscalar eventually became the design of choice. That is, until HP and Intel thought they could make a better implementation of VLIW that wouldn't struggle so much with general-purpose workloads, based on some promising research coming out of a Russian tech firm.
So a portion of the story was right, but out of order. This is what happens when I get distracted by Wikipedia at 3am and then don't attempt to recall the knowledge for several months.
Thanks NV, yeah I have been around but mostly lurking cuz IRL keeps me fairly busy. I work in the semi industry so I'm somewhat familiar with tech history and whatnot. And emus interest me not just because I enjoy the gaming aspect, but also the software/programming aspect. It really amazes me what the devs have been able to create.

As for the 5GHz AMD CPU, that's all fine and good, but at a 220-watt TDP, srsly? My old AMD Athlon 64 X2 desktop has a 90nm, 95(?)-watt TDP chip, and if I leave it on for a while the room gets noticeably warmer. At 220 watts, you can just turn off the heater in your house cuz you're not gonna need it.

I'm sure they'll get at least some takers, but in my area electric rates are going up, not down.

Anyway just my .02, don't speak for anyone but myself.
Nameless Mofo Wrote:I work in the semi industry

Oh nice. What profession if you don't mind me asking?

Nameless Mofo Wrote:As for the 5GHz AMD CPU, that's all fine and good, but at a 220-watt TDP, srsly? My old AMD Athlon 64 X2 desktop has a 90nm, 95(?)-watt TDP chip, and if I leave it on for a while the room gets noticeably warmer. At 220 watts, you can just turn off the heater in your house cuz you're not gonna need it.

It's an act of desperation. Their architecture lacks the energy efficiency and IPC to compete head-on with the latest high-end Intel CPUs and they know it, at least until Steamroller comes around. So in the meantime they're just going to OC some of their existing chips and sell them at a higher price to try to extract more profit from them and offer a product that is competitive in at least one regard (performance). They will likely have to bundle it with a liquid cooling system; I just don't see them keeping it cool enough in a worst-case scenario (hot room + dust buildup) with a medium or large sized HSF. And they'll have to charge a pretty penny for it, so I don't see any major OEMs adopting it.

They might as well. They have nothing else new in the meantime that they can offer for high end desktop users.
Actually I work for Intel, although I don't speak for them. I also worked at AMD for a while, so I've seen both sides of the x86 fence, and been doing this for a while. What about you? I must say, you seem quite knowledgeable on CPU history, microarchitecture, etc. It's nice to bump into someone else who's interested in this low-level stuff (my wife just doesn't get any of it).

As for AMD you're probably right, they haven't had anything compelling to offer since Barcelona/Shanghai/Magny-Cours, all of which I worked on, incidentally. Only we had different names for them internally - that was one difference between AMD and Intel at the time: Intel has always just used one code name internally and externally, like Haswell or whatever. I guess AMD started doing the same with Bulldozer, which was a little after my time there.

In the long run, by leapfrogging Intel, all AMD did was awaken a sleeping giant: Intel answered with the Core 2 Duo and has never looked back since. AMD does have the better integrated graphics solution, but that's only because they bought ATI.
NaturalViolence Wrote:What profession if you don't mind me asking?

While it's nice to know what companies you've worked for I'm more interested in your profession.

Nameless Mofo Wrote:As for AMD you're probably right, they haven't had anything compelling to offer since Toledo/Windsor/Orleans/Brisbane

Fixed that for you. They've been dead to me ever since Conroe.

Nameless Mofo Wrote:What about you? I must say, you seem quite knowledgeable on CPU history, microarchitecture, etc. It's nice to bump into someone else who's interested in this low-level stuff (my wife just doesn't get any of it).

Just a person who likes to know how things work and isn't afraid to read.
Let's compare AMD FX and Intel Core i5 CPU feature support!!!

Haswell model support (i5: 4 cores, no Hyper-Threading)
All models support: MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, AVX, AVX2, FMA3, Enhanced Intel SpeedStep Technology (EIST), Intel 64, XD bit (an NX bit implementation), Intel VT-x, Turbo Boost, AES-NI, Smart Cache

AMD FX, Piledriver-based (8 cores: 4 modules, each running 2 threads on 2 integer cores)
Model support:
"Vishera" (32 nm SOI). AMD is always ahead of Intel in CPU feature and instruction support.


It forces Intel to lower their prices.
I believe the higher the CPU GHz speed, the better.
I don't believe in benchmarks because they run in a straight line and are biased toward Intel.
Quote: While it's nice to know what companies you've worked for I'm more interested in your profession.
I do processor validation. We find the bugs before tapeout and before the customers/users do.

Quote: Fixed that for you. They've been dead to me ever since Conroe.
Ouch! To be fair, Barcelona was still a good architecture (architecturally an evolution of K8), and it competed well with Intel in the big MP (>4 sockets) arena, mostly because HyperTransport was better for inter-CPU communication than the FSB was, and cache coherency was more efficiently maintained. But like I said, when Intel came out with Conroe/Merom/Woodcrest it left AMD in the dust. I remember Barcelona was the first chip design AMD wrote in Verilog, which was a huge deal; K8 was written in a proprietary HDL that AMD had an in-house simulator for.
Quote: I believe the higher the CPU GHz speed, the better.
Better for what? More heat? More power consumption?
No, thanks.
Higher clock speed doesn't mean much on its own. In Dolphin, an FX-8350 @ 4.0 GHz doesn't stand a chance against an i5-2500K @ 3.3 GHz.
Quote: I don't believe in benchmarks because they run in a straight line and are biased toward Intel.
You don't believe Dolphin / PCSX2 benchmarks?