Dolphin, the GameCube and Wii emulator - Forums

Full Version: What has happened to Intel lately?
My turn to play armchair analyst! I love this game.

drhycodan Wrote:Ever since the original Core i7, these newer 2nd, 3rd, and 4th gen i7's barely improve upon the original i7 in performance.
drhycodan Wrote:Then Core i7 came out and it also offered a substantial improvement over the Core 2's. But since then, the newer Core i7's barely offer any improvement.

Not true. Sandy Bridge was a whopping 40% faster than the first generation models on average. Almost as big a jump as Pentium D to Core 2 Duo (which was about a 50% jump).

drhycodan Wrote:I remember when the Core 2 came out, it was like day and night compared to a Pentium D/4.

That's because the NetBurst architecture was a shitty design based on misrepresented research. Anything was better than that. It was so bad that they had no choice but to abandon it completely on laptops back in 2003 with the introduction of Pentium M. After Pentium 4 Prescott made minimal gains in performance over Northwood, and the architecture remained stagnant for another two years, Intel finally released a polished-up version of Pentium M to the desktop, called Core 2 Duo, in a last-minute desperate attempt to shove NetBurst under the rug and pretend it never happened. Compared to NetBurst, which had not seen any significant gains in three years at that point, it looked like a big leap. However, had they stuck with P6 and never gone down the NetBurst road to begin with, we would have seen steady 10-20% gains every year from incremental minor improvements to P6 (following the tick-tock development model), instead of nothing for three years followed by a sudden 50% jump when they switched to the good architecture. And that's roughly what we saw from AMD during this era, who took that route instead.
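
To put rough numbers on that (back-of-the-envelope only, using 15% as an assumed midpoint of that 10-20% range):

```python
# Three years of steady ~15% annual gains compound to roughly the same
# place as three flat years followed by one sudden 50% jump.
steady = 1.15 ** 3   # ~1.52x cumulative after three incremental years
sudden = 1.50        # three stagnant years, then one 50% leap
print(f"steady: {steady:.2f}x vs sudden: {sudden:.2f}x")
```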

Now as for why we've been stuck with 10-20% gains in single threaded performance for the last 13 years (closer to 10% recently): that's because of a variety of factors in the manufacturing and design of semiconductors, including:
1. Increasing voltage leakage from smaller transistors
2. Increasing leakage and general power consumption from higher frequency signals
3. Increased power density and heat density. The net results of points 1 and 2.
4. Lack of increase in cooling efficiency. Combined with points 1 through 3, the net result is a rise in temperature.
5. Insignificant increase in thermal tolerance.
6. The result of the above 5 points is extreme difficulty in increasing clock rate. Temperature will inevitably be driven upwards by points 1 through 4, but thanks to point 5 the chips can't tolerate running any hotter without becoming unstable. That makes clock rate increases impossible without significant power use optimizations, which are extremely hard to pull off (see the sketch after this list). This forces designers to rely almost entirely on IPC gains to improve performance.
7. It's getting harder and harder to find further IPC optimizations, as the "easy" ones have already been done. This means performance growth slows down at the core level, since the only two remaining ways to increase performance (IPC and power optimizations) are both very hard to pull off at this point.
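
To sketch point 6 with a first-order model: dynamic power scales as P = C * V^2 * f, and since raising the frequency generally requires raising the voltage along with it, power grows roughly with the cube of clock speed. The numbers below are invented for illustration, not real chip figures:

```python
# First-order dynamic power model: P = C * V^2 * f.
# Assumption: voltage must scale roughly in proportion to frequency to
# keep the chip stable (a rule of thumb, not an exact law).
C = 1.0e-9             # effective switched capacitance (arbitrary units)
V0, F0 = 1.0, 3.5e9    # made-up baseline: 1.0 V at 3.5 GHz

def dynamic_power(v, f, c=C):
    return c * v * v * f

base = dynamic_power(V0, F0)
for bump in (1.10, 1.20, 1.30):
    ratio = dynamic_power(V0 * bump, F0 * bump) / base
    print(f"+{(bump - 1) * 100:.0f}% clock -> {ratio:.2f}x power")
# A 30% clock bump costs ~2.2x the power, with no extra cooling headroom
# (point 4) and no extra thermal tolerance (point 5) to absorb it.
```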

You can't expect them to keep making massive improvements to the efficiency of the design forever. There has to be a point where it becomes so efficient that there just isn't much more that can be done to make it any better. I believe we are quickly approaching this point. The only reason they've been able to maintain this 10-20% annual growth for this long is by spending billions of dollars annually on R&D to make these optimizations, making the impossible tasks possible by throwing mountains of cash at them (and an army of the best electronic engineers on the planet, hired with that cash). They constantly do things that shouldn't be physically possible to get these "minor gains", many of which are not even disclosed to the public. They've already implied that they have discovered some magic way around the quantum tunneling problem that will hit them in a few years.

Now this brings up a third point. So far all I've talked about is single threaded performance. What about multithreaded performance? Well, they could have easily made significant gains there every year, as much as 45% annually. However, doing so would have required sacrificing single threaded performance, bringing the gains down to 0% at the core level. Reducing single threaded performance would allow for massive gains in multithreaded performance by freeing up a lot of power. Since these design tradeoffs would only benefit the small set of applications that can utilize a large number of cores efficiently, they instead tried to strike a balance between the two in order to improve performance as much as possible across the range of applications their typical users run. We haven't reached the point yet where a many-core processor would be useful to a typical user. The number of things it would be useful for is small, and most of them are things that only engineers and researchers would be doing, or things that are already being done on GPUs instead.
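
The tradeoff described above is basically Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the fraction of the work that parallelizes and n is the core count. A quick sketch (the parallel fractions are made up, not measured workloads):

```python
# Amdahl's law: overall speedup is capped by the serial fraction of the work.
def amdahl_speedup(p, n):
    """p = parallel fraction of the workload, n = number of cores."""
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.50, 0.90, 0.99):
    for n in (4, 16, 64):
        print(f"p={p:.0%} parallel, {n:2d} cores -> {amdahl_speedup(p, n):5.2f}x")
# A typical desktop app (p around 50%) tops out below 2x no matter how many
# cores you add, which is why trading single threaded speed for core count
# would hurt most users.
```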

drhycodan Wrote:When is Intel going to release something that offers as much an improvement in IPC as the original i7 did?

Maybe soon, maybe never. Who knows. Even Intel doesn't know, since their products don't always meet the engineers' initial predictions once they're fabricated. I'd probably point towards "never", but nobody can really be sure.

MaJoR Wrote:Uh, wrong. The first gen is Nehalem-Westmere, the second gen Sandy Bridge-Ivy Bridge, the third gen is Haswell-Broadwell. Sandy Bridge was a HUGE leap in performance. So you're wrong there.

Incorrect. Second gen is Sandy Bridge, third gen is Ivy Bridge, fourth gen is Haswell, and fifth gen is Broadwell.

Link_to_the_past Wrote:They have been focusing since the second generation of i7 on reducing power consumption

Not really. More on increasing power efficiency (the ratio of performance to power consumption). Load power consumption goes up or down slightly every year but hasn't really changed much. Higher power efficiency enables them to make both products that achieve the same performance with less power and products that consume the same power with better performance, filling different product niches, which is what they're doing. The microarchitecture itself isn't really aimed at either side of the scale though.
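
To make the distinction concrete (toy numbers, not real SKUs): efficiency is a ratio, and a gain in the ratio can be cashed in on either side of it.

```python
# Power efficiency = performance / power. A 20% efficiency gain can be
# spent on more speed or on fewer watts; all numbers here are invented.
perf0, power0 = 100.0, 84.0           # made-up baseline: 100 perf at 84 W
eff = (perf0 / power0) * 1.20         # new design: +20% perf per watt

desktop_perf = eff * power0           # same 84 W budget -> more performance
laptop_power = perf0 / eff            # same performance -> fewer watts
print(f"desktop SKU: {desktop_perf:.0f} perf at {power0:.0f} W")
print(f"laptop SKU: {perf0:.0f} perf at {laptop_power:.0f} W")
```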

Link_to_the_past Wrote:and increasing integrated gpu performance more than increasing cpu power.

I would debate this as well. The vast majority of their budget and design emphasis still goes into increasing CPU performance. With the possible exception of some Haswell variants, IGP performance growth has remained fairly consistent over the last 10 years, including recently. There doesn't appear to be any sudden shift towards GPU performance, just the normal growth that happens every year like clockwork. The die is still almost entirely CPU-side stuff as well. That may change soon though.

Link_to_the_past Wrote:They moved all their focus to more mobile offerings and i think that the lacking competition from AMD helped establish that.

I slightly disagree here. They sell more laptop CPUs because there is more demand for laptops, regardless of how much competition there is from AMD. Even if AMD only had market share in the mobile market, Intel's laptop CPU sales would still greatly outnumber its desktop CPU sales, making them more inclined to focus on the mobile market.

While there does appear to be some increase in focus on mobile devices, I think you might be exaggerating its effect on the microarchitecture. They do focus on improving power efficiency a lot, but that's not just because of laptops. It's practically the only remaining means of improving performance, regardless of platform. Virtually any optimizations they make to improve performance affect all platforms equally. The architecture's design approach is likely fairly platform neutral, as it will be used in everything from tablets all the way up to high end servers. They have made some changes that only benefit laptops, but most of these were fairly simple changes, and reallocating that time to other optimizations likely wouldn't have boosted performance significantly. Hell, even if they did focus on desktops you likely wouldn't notice any difference because, like I said earlier, both platforms mostly need the same improvements.

As far as actual products are concerned, both laptop and desktop CPUs have grown in performance equally. Your sentence implies a reduction on desktops in favor of laptops.

As far as ultra-mobile is concerned, they are only just beginning to establish any sort of presence in that market at all.

Link_to_the_past Wrote:If AMD was competitive they wouldn't be so lax regarding their cpu performance. Having established their dominance in cpu performance then they shifted their focus to areas they lacked, integrated gpu performance and ultra mobile, tablet, etc. presence.

Yeah, no. This I strongly disagree with. Intel has spent around 40 billion dollars annually for the last 5 years. The vast majority of that (excluding expenses they can't spend freely, like taxes) goes to R&D in design and manufacturing aimed at improving CPU performance. AMD could be big, small, or dead and this wouldn't change. They can't sell new CPUs if they aren't better than the old ones.

If AMD were more competitive, Intel wouldn't be spending more on improving their architecture, because there isn't any additional money they could be spending! Their profits are surprisingly thin due to their super high operating expenses. In fact we would see worse CPU performance, because increased market share for AMD would mean less money for Intel to spend on R&D.

Your second sentence makes it sound like they just waited around until their CPU performance was faster than the competition's, then stopped improving it in order to focus on other things, which is not true at all. All three of the things you listed have continued to improve at a fairly constant rate.

delroth Wrote:AMD is far too busy swimming in the PS4 and XBone money to make good CPUs.

They're barely going to make any money off of that. Since they're not handling the manufacturing, they're likely only going to make a couple hundred million dollars in royalties over the next few years, according to financial industry analysts. That's a drop in the bucket to them.

DatKid20 Wrote:AMD doesn't just integrate graphics and call it an APU.

Actually, that's exactly what they did with Llano. They did exactly what Intel did before them. They just decided to give it a fancy marketing name for no good reason.

In the future these chips may become more than a typical CPU with an IGP as HSA begins to take shape. But for now that's all they are.

And that's all I have time for today. I'm not waiting until tomorrow to finish my post. I'll get to you later DarkeoX.
(10-28-2013, 05:43 PM)NaturalViolence Wrote: And that's all I have time for today. I'm not waiting until tomorrow to finish my post. I'll get to you later DarkeoX.
Thanks. I'm by no means an expert in CPU design stuff, so I hope you'll correct me if I said some stupid things.
Methinks we've seen the extent of silicon; time for mankind to discover some new compound in a ditch somewhere.
(10-28-2013, 05:43 PM)NaturalViolence Wrote:
DatKid20 Wrote:AMD doesn't just integrate graphics and call it an APU.

Actually, that's exactly what they did with Llano. They did exactly what Intel did before them. They just decided to give it a fancy marketing name for no good reason.

In the future these chips may become more than a typical CPU with an IGP as HSA begins to take shape. But for now that's all they are.

Not really. They even got a university to research how much HSA would benefit Llano. They didn't just decide out of the blue that HSA was the way to go.
That doesn't make any sense, nor does it have anything to do with what you quoted. What study are you referring to? I can't find anything even slightly related to this on Google. Llano does not have any HSA features, period. The very first minor HSA features were introduced with Trinity. The first major leap is going to be hUMA, which will be introduced with Kaveri. Full HSA support doesn't come until 2015 with Carrizo. Llano is literally just silicon-level integration of the GPU, just like Sandy Bridge. Nothing more. It was the first stepping stone towards further integration. And they chose to call it an APU despite it being nothing more than a regular IGP. That makes your statement incorrect, which is why I pointed it out. Trinity and Richland are also pretty much just regular IGPs as well.

Doing a study on the theoretical benefits of HSA to Llano doesn't make any sense. What exactly was this study about? The vast majority of code can't even be properly ported to OpenCL, so the only real thing they could study would be the effects of using HSA acceleration vs. no GPU acceleration in a specific application. I'm not even sure how they would go about testing this since it doesn't exist yet.
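
For a sense of what "porting to OpenCL" even involves, here's a minimal PyOpenCL sketch of a trivially parallel vector add (generic OpenCL, nothing to do with whatever study is being referenced). Note the explicit copies at both ends: on a pre-hUMA chip every byte has to cross into and out of the GPU's address space, which is exactly the overhead HSA's shared memory is supposed to remove.

```python
import numpy as np
import pyopencl as cl  # pip install pyopencl

n = 1000000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags

# Explicit copies from host RAM into device buffers.
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# The kernel itself is one line of real work; the rest is plumbing.
prg = cl.Program(ctx, """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
""").build()
prg.vadd(queue, (n,), None, a_buf, b_buf, out_buf)

# ...and an explicit copy back to host RAM when the GPU is done.
out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_buf)
```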

Most likely the study you're referencing doesn't refer to Llano but to something more recent. Or it's just talking about OpenCL, not HSA. Without a link I can't say for sure, but what you've implied so far doesn't add up.

Both AMD and Intel have spent a lot of money in recent years reaching out to universities and big software businesses to push education in parallel programming, both on and off the GPU. And they have both been known to use universities for third-party data collection in this regard. So you're likely just referencing one of these OpenCL demos.

As for why they are focusing on HSA, that's a whole other topic, but I'll go ahead and delve a little bit into it here. They've been working on HSA since at least 2005, so I really doubt the study you're referring to had any bearing on whether or not they went through with it. Their plan for HSA wasn't even disclosed to investors for at least a year, let alone the public. The decision to invest billions of dollars and many years into this long term project was made by Dirk Meyer even before the merger with ATI was proposed. HSA can provide significant benefit for certain tasks if the code is properly optimized (which, despite what they've been saying publicly, can be tricky) but is completely useless for most things. That being said, I still look forward to it, since it will help us do some very interesting things with computers in the future. But yes, they did decide to go with it "just out of the blue", regardless of how useful it will actually end up being. Of course "out of the blue" also includes internal research and theories devised by the engineers working on it. But they started developing it before they started trying to convince people to use it. They didn't wait around until other organizations thought it was a good idea. If they had, they would have come into the game way too late to make any difference in the market. Big businesses pretty much always operate this way.

If HSA really is an open platform, Intel will easily be able to copy everything that AMD is trying to do, and might even make a better implementation since they have access to better manufacturing technology and a crapton more financial resources for R&D. In recent years OpenCL actually seems to be the one area where Intel's IGPs are dominating AMD's, which is a bit ironic considering AMD's current marketing platform for APUs is all about how OpenCL acceleration in future applications will make their APUs faster than the competition. Everybody seems to be making HSA out to be some sort of super revolutionary platform that will make Intel CPUs obsolete and shift the balance of power back to AMD. After seeing people say the same things about x86-64, I am skeptical that it will really give AMD that much of a boost.

The following is entirely baseless speculation:
I think what will probably end up happening is that it will give AMD a slight lead in a few applications for 6-12 months until Intel implements it, effectively killing any advantage they worked for. They won't be able to get strong enough software support fast enough for it to be a significant tide-turning event. That's what happened with x86-64, anyway. I do find it very surprising that Intel hasn't already gotten on board with HSA. But they could still be secretly planning to support it in the future if it takes off. Intel's massive capital has allowed them to do sudden 180 degree turns with ease in the past in order to survive.
Dated 2012. More recent than Llano.

Also if you read the annotations at the bottom:
ExtremeTech Wrote:Now, unfortunately we don’t have the exact details of how the North Carolina researchers achieved this speed-up. We know it’s in software, but that’s about it. The team probably wrote a very specific piece of code (or a compiler) that uses the AMD APU in this way. The press release doesn’t say “Windows ran 20% faster” or “Crysis 2 ran 20% faster,” which suggests we’re probably looking at a synthetic, hand-coded benchmark.
ExtremeTech Wrote:Updated @ 17:54: The co-author of the paper, Huiyang Zhou, was kind enough to send us the research paper. It seems production silicon wasn't actually used; instead, the software tweaks were carried out on a simulated future AMD APU with shared L3 cache (probably Trinity). It's also worth noting that AMD sponsored and co-authored this paper.

Updated @ 04:11 Some further clarification: Basically, the research paper is a bit cryptic. It seems the engineers wrote some real code, but executed it on a simulated AMD CPU with L3 cache (i.e. probably Trinity). It does seem like their working is correct. In other words, this is still a good example of the speed-ups that heterogeneous systems will bring… in a year or two.

So basically what I thought. It was a synthetic simulation of future HSA features that don't exist yet. It had nothing to do with Llano or any real applications.

Edit: It also seems the study wasn't publicly released, so we don't know the details of what exactly they did beyond "they sped something up by 20% somehow". We don't even know which specific features caused the speedup.

Edit 2: I'm still not entirely sure what you're trying to prove here, what this study has to do with it, or how it disagrees with the points I raised.
All I was saying is that Llano wasn't just a CPU and GPU put together. It was a building block for HSA. Intel's IGP is in there for light tasks that don't really stress a normal GPU.
DatKid20 Wrote:All I was saying is that Llano wasn't just a CPU and GPU put together.

Yes it is. See above.

DatKid20 Wrote:It was a building block for HSA.

Just because they were planning to have HSA IGPs in the future (with an entirely different type of GPU and CPU microarchitecture, I might add) doesn't make Llano anything more than a regular CPU with an IGP. And they chose to call it an APU anyway, making the term as it stands now fairly pointless.

DatKid20 Wrote:Intel's IGP is in there for light tasks that don't really stress a normal GPU.

The same goes for AMD though. They leapfrog each other every few months, but overall AMD's IGPs are a little bit faster. They both support the same APIs too.
Double posting in case he doesn't see it.

Darkeox Wrote:AMD is progressively shifting their CPU business to APU,

What exactly does this mean? An APU is just a CPU with an IGP, and for a while now both companies have had IGPs in most of their CPUs. What exactly is shifting here?

Darkeox Wrote:as they saw that APUs can be quite attractive for your typical casual gamer who wants to watch HD stuff and do more usual stuff with his computer. He/She wants something that doesn't cost too much but gives room for improvement and allows him/her to play BF 3 / Dota / LoL / CoD / WoW at medium settings, possibly with some shiny effects, without losing too much FPS. A $140 A10-6800K can do this for them just fine, and I think that even Intel understood the huge potential that those APU solutions, which look bastardised to people who build gaming rigs, can have in the casual gamer market. When you buy something like that, it's clearly not to play Crysis 3 (even at low) or Metro LL. So I think those systems can take over the regular CPU + dedicated GPU provided that they continue to improve. Thus they could very well, just as they owned the low-midrange GPU market, snarf the low-mid-range CPU market.

Well, they aren't selling too well, and I think I know why. Most of the applications people use rely entirely on CPU speed. Only a small set of computer users are "gamers", and most gamers won't touch IGPs with a ten-foot pole, since their performance is still abysmal compared to discrete solutions. And for everything else, having a fast IGP doesn't make a difference. So they end up with a product that's only really marketable to a small subset of a small subset of users (low-end gamers).

This is why AMD is pushing so hard to get developers to use the GPU for general purpose applications. They can't compete with Intel in CPU performance anymore, so this is their only chance at turning things around.

Darkeox Wrote:Which is why their positioning themselves in the console market doesn't mean they abandon or give less consideration to the CPU. Certainly there's money, but there's also the experience of building systems where the CPU & GPU get closer and closer to each other, and where software can speak more directly and more easily with the GPU for stuff that goes on on display.

Well, that experience didn't really result from this project. It resulted from the research they were doing anyway, long before they were approached by Microsoft and Sony for designs.

They were really the only logical choice this time around. They're the only company right now that can provide both x86 support and a fast GPU on one chip. Every other ISA except POWER and IA-64 has been abandoned for high performance CPUs now, and both of those are only suitable for servers, offer no IGP support, and are controlled by companies that don't offer flexible designs.

Unless HSA really takes off in a big way, I don't think APUs with a GPU emphasis are a great idea for PCs. They have a very awkward position in the market right now. On consoles they make a lot of sense, since consoles don't need good CPU performance, so that die area and power budget can be used for a bigger GPU instead.

Darkeox Wrote:Even the Mantle API seems a push into that direction.

Mantle doesn't have a whole lot to do with this.

Darkeox Wrote:However, I still believe Intel APU by themselves still have some ground to cover before they can become as attractive as AMD APUs are performance/$ wise.

For games, yes. Though lately the gap between the two is pretty small, and in fact ever since Haswell, Intel's top-of-the-line IGP has outperformed AMD's across the board.

For the longest time Intel barely dedicated any budget to IGPs. They were considered unimportant to end users, so why bother? That's part of the reason why they have historically sucked. In the last few years, though, their R&D budget for GPU technology has been tripling annually. They're getting ready to put more of an emphasis on IGPs in the future in case AMD is right about the shift to GPGPU. If this continues, in a few years Intel's IGP drivers and performance could actually be better than AMD's. They certainly have the money. Right now the biggest thing holding them back, in my opinion, is actually the quality of their drivers, not the performance of the IGPs. The performance they're putting out right now is very competitive, but their drivers still suck compared to AMD's.

AMD has found itself in the position of having an idea ready in the form of a product and trying to push the market to adopt it in order to be successful (pulling an Apple, as I call it). Intel, meanwhile, seems to be taking the more conservative "wait and see" approach, adapting to the market as it changes. At least that's how I see it.

Darkeox Wrote:If I remember correctly, a 3770K + HD 4000 was performing in-game a (tiny) bit worse than the A8-3850 (2.5-6 GHz + 6550), so depending on games they might be on par (of course, this is casual gamer stuff: we don't want HBAO, MSAA x8, V-Sync or TressFX, and we play preferably at 720p). That was like over a year ago, and you can already see that we're not really in the same price category, yet the delivered performance for the applications that interest us is roughly the same. Well, if you consider the power of the CPU itself, of course it would cost more. But the thing here is, as a self-contained casual gaming solution, the AMD APU wins by far, and since then we've got the A10. I don't know how things went for Haswell, and I hear Intel are planning for HD 5000 and stuff, but right now, if I wanted a ~$300 rig that does usual stuff + being able to play some new games at low-medium and knew that I wouldn't be touching it for the next 2-3 years, I'd go AMD without a doubt.

Where have you been? Haswell and the HD 5000 are both already out.