Dolphin, the GameCube and Wii emulator - Forums

Full Version: The Legend of Zelda: The Wind Waker CPU Benchmark
jbone1337 Wrote:Well the 3650 is rated higher than a 5470m, due to the 128bit bus compared to the 64bit bus, so it will have a higher memory bandwidth. And it has 40 more unified shaders, but a slower core/shader clock rate (reference clock), and a slower memory clock.

It doesn't matter whether the higher shader throughput is due to a higher SP clock rate or higher SP count. All that matters is that it has a higher shader throughput. Likewise it does not matter if the higher memory bandwidth is due to a higher memory clock rate or wider memory bus. All that matters is that it has a higher memory bandwidth.

These are by far the two most important specs other than architecture (and we already discussed why the different architecture won't significantly impact performance). Pixel, vertex, and texture fillrates have become almost useless specs in modern games since they are almost always bottlenecked by video memory bandwidth and modern games are shader driven. Take a look at any modern game benchmark and you'll see what I mean. If the SP architecture is the same, usually the only specs that drive performance are peak shader throughput and video memory bandwidth.
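To put rough numbers on this, here is a quick back-of-the-envelope sketch. The SP counts, clocks, and bus widths below are assumptions based on commonly published specs for a desktop HD 3650 (GDDR3) and a Mobility HD 5470, so treat the exact values as illustrative rather than authoritative:

```python
# Theoretical shader throughput and memory bandwidth from raw specs.
# All spec values below are assumed (typical published figures).

def shader_gflops(stream_processors, core_clock_mhz, flops_per_sp=2):
    # Each SP can retire a multiply-add (2 FLOPs) per clock.
    return stream_processors * core_clock_mhz * 1e6 * flops_per_sp / 1e9

def mem_bandwidth_gbs(bus_width_bits, effective_mem_clock_mhz):
    # bytes/s = (bus width in bytes) * effective transfer rate
    return (bus_width_bits / 8) * effective_mem_clock_mhz * 1e6 / 1e9

# (GFLOP/s, GB/s) under the assumed specs:
hd3650 = (shader_gflops(120, 725), mem_bandwidth_gbs(128, 1600))   # ~(174, 25.6)
hd5470m = (shader_gflops(80, 750), mem_bandwidth_gbs(64, 1600))    # ~(120, 12.8)

print(hd3650[0] / hd5470m[0])  # ~1.45x shader throughput
print(hd3650[1] / hd5470m[1])  # ~2.0x memory bandwidth
```

With these assumed clocks the gaps land at roughly +45% shader throughput and +100% bandwidth; shipped cards used a range of memory clocks, which is where a figure closer to +90% can come from.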

Quote:But in actual gaming performance at a 720P resolution (1280x720/1366x768), they will score about the same (3650 would get maybe 1-2 fps more lol), and that level of difference is negligible.
If you were playing at like 640x480 or something, there would be a larger difference in performance between the two. But really that's not a real world benchmark when benchmarking a gpu, since no one is ever going to be really playing at that resolution....

I really doubt this. Common sense dictates that if all of the important specs are much higher and the SP architecture is the same the performance will be much higher unless there is an outside bottleneck. It's a much faster chip so why would it perform almost the same? It should perform 45% better under a worst case scenario (difference in shader throughput) and 90% better under a best case scenario (difference in memory bandwidth) if I'm not mistaken.

Now I have posted a reputable source stating a significant performance difference between the two. You have only provided one source so far. And your source is a notoriously unreliable benchmark that pits the 5470m against the AGP version of the 3650 (which is much slower than the PCI-E version, especially in synthetic tests) and doesn't state whether the 3650 is the more common GDDR3 variant or the less common DDR2 variant. Considering that the 3650 is only scoring 2/3 as high as the 5470m in that benchmark, it's safe to say that the AGP bus is severely degrading its performance. This is why I hate passmark: wacky, out-of-place results all over the place. The 3650m, by the way, has the same low 120 GFLOP/s throughput and 22.5 GB/s of memory bandwidth. Unless the test system was using poorly optimized drivers (which could be the case since these are user benchmarks, not professional benchmarks), common sense would once again dictate that the performance would be the same or higher, yet it is 17% lower. Once again something is clearly off. This is why you need to find professional benchmarks, preferably games. Otherwise you end up with this kind of nonsense.

Also, for the record:
GPUs with significantly higher video memory bandwidth but similar shader throughput show higher performance deltas at higher resolutions.
GPUs with significantly higher shader throughput but similar video memory bandwidth show higher performance deltas at lower resolutions.
GPUs with significantly higher video memory bandwidth and significantly higher shader throughput show high performance deltas at all resolutions. The difference in performance deltas between higher and lower resolutions depends on how much higher or lower the video memory bandwidth delta is compared to the shader throughput delta.

This of course all assumes a GPU bottleneck.
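A toy model makes this resolution behavior easier to see. Every cost constant here is invented purely for illustration (the GFLOP/s and GB/s pairs are assumed theoretical specs, not measurements):

```python
# Toy model: frame time is whichever is slower, shader work (a fixed
# per-frame vertex cost plus a per-pixel cost) or memory traffic.
# All cost constants are made up for illustration.

def fps(pixels, gflops, bandwidth_gbs,
        vertex_flops=2e8, flops_per_pixel=1000, bytes_per_pixel=150):
    shader_time = (vertex_flops + pixels * flops_per_pixel) / (gflops * 1e9)
    memory_time = (pixels * bytes_per_pixel) / (bandwidth_gbs * 1e9)
    return 1.0 / max(shader_time, memory_time)

hd3650, hd5470m = (174, 25.6), (120, 12.8)   # assumed (GFLOP/s, GB/s)

for res, px in [("640x480", 640 * 480), ("1920x1080", 1920 * 1080)]:
    print(res, round(fps(px, *hd3650) / fps(px, *hd5470m), 2))
# 640x480 1.45   (both shader-bound: the delta equals the throughput ratio)
# 1920x1080 1.86 (the bandwidth-starved card hits its memory wall first)
```

The exact numbers depend entirely on the made-up constants, but the shape matches the claim above: the card with the bigger bandwidth advantage pulls further ahead as resolution rises.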
(03-29-2013, 12:20 PM)admin89 Wrote: [ -> ]
Quote:it does seem that the intel integrated graphics are definitely bottlenecking this cpus performance.
I doubt that. The Intel HD 4000 is faster than the AMD 5470.
Both the i5 3210M and i5 2450M have the same 2.9GHz turbo boost when two cores are active. Ivy Bridge is 5-10% faster than Sandy. You should get more FPS with the i5 3210M.
Try: Open the Intel Graphics Control Panel - 3D - Set everything to max
it was set to max, and the intel drivers are the most up to date, and it was using ddr3-1600 memory in dual channel.

The Intel HD 4000 is hard to get clean benchmarks on because there are too many of them; they ship with too many CPUs, and a CPU like an i7-3770k is really going to throw off the average scores of these graphics.
I would say it's performing about how it should for a mobile Core i5. With better cooling it may get a little better performance (like modding), and like I said, a little more tweaking on the OS side of things probably could have netted a few more fps as well.


@naturalviolence
I know the "theoretical" performance of it is much higher. and it probably is like 75-100% faster than the 5470.
but 100x0=0.... in a game, that huge difference is almost no difference, and will only be a few fps difference (unless playing something that has like zero requirements like the sims3 or something).

I have no idea how you are not understanding this.... benchmark software IS NOT EVEN CLOSE TO REAL WORLD PERFORMANCE.... you have 1 benchmark, from 1 source, using benchmark software.... thats laughable. not to mention, those are like rounded benchmarks, like what cpu were they using for the 3650 benchmark, as the 5470m is obviously only going to be in laptops, so just that fact is going to make it inaccurate. And there is no clear "measurement" of what these different "tiers" of graphic card differ in performance.


if you were to get the same specs (or as close as possible), and test them side by side on say dolphin at 2x resolution, or say on cod:blops2 on lowest settings at like 1280x720 or 1024x768, they would get almost identical performance. (give or take 1-2 fps)
jbone1337 Wrote:I know the "theoretical" performance of it is much higher. and it probably is like 75-100% faster than the 5470.

The specs are much higher. Specs and architecture are directly proportional to real performance. You cannot expect a chip that is 45%+ faster in every important spec to perform the same.

jbone1337 Wrote:but 100x0=0....

?????

What does this equation represent?

jbone1337 Wrote:in a game, that huge difference is almost no difference, and will only be a few fps difference (unless playing something that has like zero requirements like the sims3 or something).

That makes no sense. You're basically saying that a much faster chip will make little difference in game performance. Why do you believe this? That's just crazy. If that were the case there would be no reason for anyone to ever update their graphics card.

Also a much faster chip will show less of a performance advantage when playing a less demanding game, not more. This is because it is more likely to run into a cpu or framelimiter bottleneck.

jbone1337 Wrote:I have no idea how you are not understanding this.... benchmark software IS NOT EVEN CLOSE TO REAL WORLD PERFORMANCE....

I do understand that. That's why I told you not to use passmark. It's a synthetic test.

jbone1337 Wrote:you have 1 benchmark, from 1 source, using benchmark software.... thats laughable.

What I have is a benchmark analysis chart made from game benchmarks and synthetic benchmarks. The chart did not list the sources (they are extrapolated from tomshardware graphics card benchmarks which you can find individually on the site) so I don't know why you assumed that this was from one benchmark, or why you assumed that the benchmark was synthetic.

I'm also not sure why you would mention this since the only source you have provided so far is a single synthetic benchmark. So why would you call that laughable if that's exactly what you provided?

jbone1337 Wrote:not to mention, those are like rounded benchmarks,

Meaning?

jbone1337 Wrote:like what cpu were they using for the 3650 benchmark, as the 5470m is obviously only going to be in laptops, so just that fact is going to make it inaccurate.

Game benchmarks are almost always bottlenecked by the gpu. You can look up the test system setups that they use in their individual benchmarks. They almost always use the fastest cpu currently available on the market, or close to it, to ensure accurate results. I'm surprised that you pointed this out. Have you never been to tomshardware before?

But hey, if you don't like tomshardware, here are some other 3650 benchmarks that back up their hierarchy:
http://www.legitreviews.com/article/652/3/
http://www.tweaktown.com/articles/1303/sapphire_radeon_hd_3450_and_hd_3650/index.html

jbone1337 Wrote:And there is no clear "measurement" of what these different "tiers" of graphic card differ in performance.

This is true. Because it's a hierarchy chart based on their archived benchmarks. Once again I'm guessing that you've never been to tomshardware.com before. It's one of the most respected and renowned gpu benchmark sources on the web. I can't really think of a more trustworthy source except maybe anandtech. Unfortunately since the HD 3650 was never a very popular card and is quite old at this point anandtech never benchmarked it so I can't use them. I'm still wondering why you picked the 3650 out of all of the cards that you could have picked for a comparison.

jbone1337 Wrote:if you were to get the same specs (or as close as possible), and test them side by side on say dolphin at 2x resolution, or say on cod:blops2 on lowest settings at like 1280x720 or 1024x768, they would get almost identical performance. (give or take 1-2 fps)

If you had two gpus with the same architecture and similar specs they would indeed perform similarly. But the two gpus you are comparing have very different specs. One is very clearly much faster than the other. It is impossible for a much faster chip to produce similar performance unless there is an external bottleneck. This is just common sense backed up by every game benchmark ever done.

Now, if you're going to hijack a thread to argue with me about GPU performance and tell me that much faster GPUs don't significantly impact game performance unless they are old games, you could at least do me the courtesy of posting some reputable sources. Since you haven't, I must ask for them again.
idk why you feel the need to come after with me with such determination lol

You seem to not be able to comprehend or understand things very well at all....
You started this because of my first post.... (well the one after the bench posting), and yet you dont understand why i chose a 3650? really? what was the reason you started arguing with me then? O.O

since you obviously are slow upstairs, ill repeat it...
on the FAQ page for dolphin, it lists that if you want to play at 1x resolution you should have a card around the performance of a 3650. And the reason i brought that up is because it was commented that the 5470m wasn't powerful enough to handle dolphin at 1x resolution and would bottleneck the cpu that was used in the test.
I basically said the 5470m was enough, and it's about the performance of a 3650.

again, i have NO clue why you decided to pick apart that one detail, and are so overzealous about it...

and you still dont seem to get it, ill try one more time to explain it to you as simple as possible...
the 3650 is better than the 5470m.
but the 3650 is a SUPER SUPER SUPER LOW END graphics card by todays standards. 120 shaders. Integrated graphics these days are equal or better than it....

ok a visual example
5470m = 1 penny
3650 = 2 pennies
now 2pennies is 2x as much, but its still 2 pennies, which isnt going to be enough for anything....
modern games wouldn't even see a difference in them because they are both horrid garbage....
like crysis3, 5470m= 7fps, 3650=8fps. (for an example) neither are enough to do anything with, and you would literally not even be able to really tell the difference.

maybe one more visual example to help you
[----------------------------------------]
^---------------------------------------^
intel hd 3000------------------------gtx titan
on this type of scale between modern low end gpu and high end gpu, the 5470m and 3650 are dead even.
A modern game is going to see them both and just label them as a piece of shit. there wont be any difference, it will just be piles of shit as far as the game is concerned.

And omg herpa derp derp, i was talking benchmarks in low spec gaming 3650 vs 5470m, so OFCOURSE framelimiting would be disabled.... omg like really? are you THAT stupid... wtf is going on in your head...."o derp there wouldnt be any difference in low spec games if you play on low settings and low resolutions if you have the fps capped at 60 and they are both running at 60"...

For the record, i have seen quite a few wrong or "off" benchmarks on tomshardware.... (which usually points to them being paid off to do biased benchmarks, which like 99.99% of all "professional" benchmarkers end up doing). There is no such thing as a "reliable source" for benchmarks. the only one is just averaging every benchmark you can find. You cant base anything off one site's benchmarking.
And i dont "use passmark"... i was just lazy and posted the first google result.... as i dont need any "benchmarks" for this anyway, since this really doesnt even matter....


you may know more about hardware than me, but your logic, critical thinking, problem solving etc side of your brain seems to not be functioning at all.....

well i think im done with this thread....
do the world a favor, dont procreate...
@jbone1337 - Please refrain from ad hominem attacks against other users. If you're debating or discussing something, keep it about the topic at hand.
I will give you one more chance to explain your arguments and back them up with data. After that I simply can't keep letting you sling insults at me while waiting for you to deliver the necessary data.

jbone1337 Wrote:idk why you feel the need to come after with me with such determination lol

It's nothing personal. When someone says something that is factually incorrect I point it out to them and ask them if they can provide any sort of data to back it up.

jbone1337 Wrote:You seem to not be able to comprehend or understand things very well at all....
jbone1337 Wrote:since you obviously are slow upstairs, ill repeat it...
jbone1337 Wrote:and you still dont seem to get it, ill try one more time to explain it to you as simple as possible...
jbone1337 Wrote:And omg herpa derp derp,
jbone1337 Wrote:omg like really? are you THAT stupid... wtf is going on in your head...."o derp there wouldnt be any difference in low spec games if you play on low settings and low resolutions if you have the fps capped at 60 and they are both running at 60"...
jbone1337 Wrote:you may know more about hardware than me, but your logic, critical thinking, problem solving etc side of your brain seems to not be functioning at all.....

well i think im done with this thread....
do the world a favor, dont procreate...

I would just like to point out that I have never personally insulted or attacked you. And that throwing ad hominem attacks at someone rather than backing up your points tends to reflect poorly on you for anyone who might be reading the post.

jbone1337 Wrote:You started this because of my first post.... (well the one after the bench posting), and yet you dont understand why i chose a 3650? really? what was the reason you started arguing with me then? O.O

Well if you go back and read my first response you'll see that I was challenging your claim that the 5470m was just as fast as a 3650.

jbone1337 Wrote:on the FAQ page for dolphin, it lists that if you want to play at 1x resolution you should have a card around the performance of a 3650. And the reason i brought that up is because it was commented that the 5470m wasn't powerful enough to handle dolphin at 1x resolution and would bottleneck the cpu that was used in the test.
I basically said the 5470m was enough, and it's about the performance of a 3650.

Well, you didn't point out the source of your quote, so that's why I asked. This makes a lot more sense. However I would not put too much stock in the FAQ since it is mainly intended to offer complete newbies a general rundown of how demanding dolphin is. It is not intended to give a detailed or accurate analysis of the requirements of dolphin. Even Starscream, the FAQ creator, will openly admit this if you ask him.

For a more detailed analysis as well as benchmark data consult the minimum specs - video card thread: http://forums.dolphin-emu.org/Thread-minimum-specs-video-card

jbone1337 Wrote:again, i have NO clue why you decided to pick apart that one detail, and are so overzealous about it...

I picked apart a lot more than just that one detail.

jbone1337 Wrote:the 3650 is better than the 5470m.
but the 3650 is a SUPER SUPER SUPER LOW END graphics card by todays standards.

Both of these statements are correct.

jbone1337 Wrote:120 shaders.

"Stream processors" is the correct term, but yes.

jbone1337 Wrote:Integrated graphics these days are equal or better than it....

Some are. Most still aren't. The Intel HD 4000 performs similarly. Only the top-end Trinity and Llano IGPs can beat it.

jbone1337 Wrote:ok a visual example
5470m = 1 penny
3650 = 2 pennies
now 2pennies is 2x as much, but its still 2 pennies, which isnt going to be enough for anything....

True, but I think you'll agree that 100% is a big difference.

jbone1337 Wrote:modern games wouldn't even see a difference in them because they are both horrid garbage....
jbone1337 Wrote:neither are enough to do anything with, and you would literally not even be able to really tell the difference.
jbone1337 Wrote:A modern game is going to see them both and just label them as a piece of shit. there wont be any difference, it will just be piles of shit as far as the game is concerned.

A chip that is twice as fast will perform twice as well if there are no outside bottlenecks. This is just common sense. And doubling the framerate of any game makes a very noticeable difference.

jbone1337 Wrote:like crysis3, 5470m= 7fps, 3650=8fps. (for an example)

Source?

jbone1337 Wrote:maybe one more visual example to help you
[----------------------------------------]
^---------------------------------------^
intel hd 3000------------------------gtx titan

What is the point of this illustration? All it points out is that a gtx titan is much faster than an Intel hd 3000. What does that have to do with the performance of a 3650 relative to a 5470m?

jbone1337 Wrote:on this type of scale between modern low end gpu and high end gpu, the 5470m and 3650 are dead even.

Source? The specs of the 3650 are much higher.

jbone1337 Wrote:i was talking benchmarks in low spec gaming 3650 vs 5470m, so OFCOURSE framelimiting would be disabled....

Well, a lot of PC games have built-in framelimiters that are impossible to turn off without modifying the source code, since there is no built-in option. And since most games are also closed source, this means it is impossible to turn them off. When you're benchmarking low end games with low end settings you run into these framelimits, which diminishes the performance delta between different gpus. This is why all good gpu benchmarks use demanding games and high settings: to maximize the performance delta.
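As a concrete sketch of how a cap compresses the measured gap (the fps numbers here are invented):

```python
# A built-in frame limiter clamps whatever the GPU could deliver,
# shrinking the measured gap between a slow card and a fast one.

def measured_fps(gpu_fps, frame_cap=60):
    return min(gpu_fps, frame_cap)

slow, fast = 55, 110  # hypothetical uncapped potential: a 2x difference
print(measured_fps(slow), measured_fps(fast))  # 55 60 -> reads as a ~9% gap
```

A benchmark run under a 60 fps cap would make these two cards look nearly identical even though one is twice as fast.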

jbone1337 Wrote:For the record, i have seen quite a few wrong or "off" benchmarks on tomshardware....

Such as?

jbone1337 Wrote:(which usually points to them being paid off to do biased benchmarks, which like 99.99% of all "professional" benchmarkers end up doing). There is no such thing as a "reliable source" for benchmarks.

Ok, so now you are claiming that most professional benchmarking organizations are paid off to do biased benchmarks and are unreliable, thus automatically invalidating any and all benchmarking data. Do you have any specific examples of this, or any evidence?

jbone1337 Wrote:And i dont "use passmark"... i was just lazy and posted the first google result....

Ok then do you have any other data?

jbone1337 Wrote:as i dont need any "benchmarks" for this anyway, since this really doesnt even matter....

If you're going to claim that the two chips perform similarly you will need benchmarks to prove it. Because the specs do not back up that claim.

So far you have made the following claims which you need to back up:
1. The 5470m achieves about the same performance as a 3650 graphics card
2. A much faster GPU will not significantly impact performance
3. GPU performance delta increases as GPU load decreases (older games, lower settings)
4. All or most professional benchmark sources are corrupt, biased, and unreliable.

I apologize to SS for letting this go on as long as it has. Unless he has actual data to bring to the table in his next response I won't derail the thread any further.
Just to clear some things up here:

(03-31-2013, 07:00 AM)NaturalViolence Wrote: [ -> ]However I would not put too much stock in the FAQ since it is mainly intended to offer complete newbies a general rundown of how demanding dolphin is.

You don't have to be a "noob" to find the FAQ useful.

(03-31-2013, 07:00 AM)NaturalViolence Wrote: [ -> ]It is not intended to give a detailed or accurate analysis of the requirements of dolphin. Even Starscream, the FAQ creator will openly admit this if you ask him.

The hardware requirements not being detailed was intentional. It's supposed to give people an idea of which hardware to use (or not use) at a glance, it's not supposed to take hours to read or be confusing (like some other guy's attempt at hardware-related threads). Smile

As far as you saying that it's not accurate, that's not true. It is accurate. And for the record, people should put stock in the FAQ. It has accurate descriptions in every subject that is discussed there (most of which were not written by me).
I have already had this debate with you before and will not enter into it again here unless you want me to. I disagree with its content and structure. Mostly I view the lack of detail as being too severe to draw anything more than the most basic conclusions, which will not hold up under a lot of circumstances. It doesn't account for a number of factors that greatly impact performance, such as OS, settings, and video backend (among other things). And it doesn't mention the vast majority of cpu and gpu hardware at all. Even my threads, as convoluted as you seem to view them, are nowhere near as detailed as I originally wanted them to be before I stopped working on them. But at least someone with an AMD FX cpu, for example, would be able to look at them and figure out where they stand. Perhaps this is just a difference of opinion, but I do feel quite strongly about this.

Anyways, I guess this doesn't really affect my debate with jbone1337, since regardless of how I feel about the FAQ it doesn't really have any impact on any of his claims.
(03-31-2013, 07:33 AM)NaturalViolence Wrote: [ -> ]I have already had this debate with you before and will not enter into it again here unless you want me to. I disagree with its content and structure. Mostly I view the lack of detail as being too severe to draw anything more than the most basic conclusions, which will not hold up under a lot of circumstances. It doesn't account for a number of factors that greatly impact performance, such as OS, settings, and video backend (among other things). And it doesn't mention the vast majority of cpu and gpu hardware at all. Even my threads, as convoluted as you seem to view them, are nowhere near as detailed as I originally wanted them to be before I stopped working on them. But at least someone with an AMD FX cpu, for example, would be able to look at them and figure out where they stand. Perhaps this is just a difference of opinion, but I do feel quite strongly about this.

Anyways, I guess this doesn't really affect my debate with jbone1337, since regardless of how I feel about the FAQ it doesn't really have any impact on any of his claims.
/facepalm....
you are basing your "facts" off of "benchmarks" and are saying that because i'm not using a similar method for mine, it's invalid, and that benchmarks are the 100% end-all be-all of factual information....
benchmarks are a "GUIDELINE" not FACTUAL INFORMATION.....

you are stuck on "since the 3650 is 2x better than the 5470m it MUST SCORE AT LEAST 2x HIGHER FPS"

Here's some real world testing + some of your "renowned benchmarks" you love so much.
http://www.anandtech.com/show/2616/5
http://www.tomshardware.com/reviews/gaming-graphics-card-review,3107-7.html

the 4670 is 6 "tiers" higher than the 3650. (9 above the 5470m)

In the benchmark above, it's "almost" 2x better at the highest delta.
so if 6 "tiers" = ~2x performance,
then by that logic, the 3 tier separation between the 5470m/3650 obviously can't be 2x the difference in performance....


Now as for the real world testing.

game, runes of magic
test area, HoS instance
settings medium @ 1366x768 resolution (laptop), 1440x900 (desktop)

amd turion II x2 @ 2.4ghz, 4GB of ddr3 1066, amd 5470m (512MB GDDR3)
intel core 2 duo @ 2.93ghz, 4GB of ddr2 800, amd 4670 (1GB GDDR3) overclocked to 725-750mhz (dont remember exacts)
my laptop vs my brothers desktop, same settings, same area other then resolution (1366x768 vs 1440x900)

laptop = 20-40FPS
desktop = 30-60FPS

explain that one to me. the 4670 should EASILY get 10x+ performance by your explanation. and the fact its also a better cpu. Its at a higher resolution but it also has a 128bit bus with 1GB video buffer vs 64bit 512mb.
(also his desktop has literally NOTHING i mean NOTHING running the background. hes installed like 3 things to his desktop lmao)
It wasnt an "ideal" test, as i wasnt going for testing, i was just playing the game on both, and noticed the difference in fps wasnt as much as i thought it would be.

So ya, better cpu, FAR better gpu, fewer background tasks running, yet it doesn't even score 2x the performance. Ya, at 1366x768 it would probably hit 2x, but still, basing off "your benchmarks" it should score 10x+ performance....


Im basing mine off generalization and REAL testing... not "benchmarks" which 9.9/10 times are false (ive gotten as much as 2x the performance of my 5470m using the EXACT i mean EXACT same settings as some benchmarks). So it's like they were running hardcore 3d model rendering in the background or something during tests, i swear....

and literally all your replies in your last post serious make me facepalm to the point i think caused myself damage.... you are seriously like... WOW.....
you are doing hardcore trolling, as i wont accept the fact someone could possibly be THAT retarded....

and if you SERIOUSLY think big name benchmark people aren't paid off for results, you SERIOUSLY KNOW NOTHING about how the world works lol (real world, not w/e fake fantasy world you live in...)
You have still yet to show one shred of reputable evidence that pro hardware reviewers such as toms or anand are paid off. As for your little test on runes of magic: that is pretty much worthless. You need to have the resolutions the same so that the gpu is doing the same amount of work in both cases. Also, the lower you set the graphics settings, the more you begin benching the cpu instead of the gpu. The 4670 could still have a bit of headroom where the 5470m may not. Where is the average fps as recorded by a program like fraps? Average fps is far more important than min or max fps. Still, 10 fps at worst and 20 fps at best is pretty significant.
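For what it's worth, the resolution mismatch alone is measurable (plain arithmetic on the two resolutions quoted above, no assumed specs):

```python
# Pixels per frame at each test resolution.
laptop = 1366 * 768    # 1,049,088 pixels
desktop = 1440 * 900   # 1,296,000 pixels
print(round(desktop / laptop, 2))  # 1.24 -> the desktop GPU pushes ~24% more pixels per frame
```

So even before driver, CPU, and settings differences, the desktop card was doing roughly a quarter more rendering work per frame.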