(02-18-2013, 04:23 AM)Starscream Wrote: only you'll be able to use 3-4xIR without losing speed.
YES!!!!!!! That's what I like to hear. HD all the way, I'll finally get to use my 1080p monitor to the max.
Well if you like IR more than speed, everyone's happy.
yodenny Wrote:So I recently started hearing these terms and I'm a little confused. To my limited understanding, is the GPU the graphics card and the CPU the PC?
NaturalViolence Wrote:Processor: A component in a computing system that processes (manipulates) data.
CPU (central processing unit): The part of a computing system that executes the instructions of a computer program. It is the main processor in a computer, and it is a general-purpose processor (not specialized toward any specific type of task) rather than a specialized one. A computer may have multiple CPUs.
Microprocessor: All of the components of a processor integrated onto a single chip.
APU (accelerated processing unit): A marketing term used by AMD to refer to a GPU and CPU on a single chip. Intel does this too but doesn't use the term APU; it just calls them CPUs.
GPU (graphics processing unit): A microprocessor designed for graphics processing.
IGP (integrated graphics processor): A GPU with no dedicated video memory. It is usually integrated into the chipset, northbridge, CPU package, or CPU die.
From my dictionary (thread):
http://forums.dolphin-emu.org/Thread-cpu-microarchitecture-hierarchy
Revised to remove extra information that is irrelevant to this thread.
!!!Warning: wall of text that doesn't go anywhere below!!!
To further elaborate on the GPU: back in the 90s, companies started making products to boost the performance of graphics applications on PCs. Up until that point most consumer-level computers had only one processor (known as the CPU) that performed all of the computations for all programs. Obviously this was done because having different specialized chips for every little thing is extremely expensive from a hardware perspective and extremely difficult to program from a software perspective. So instead we just had one processor designed to do "general math and logic," and all tasks were reduced down to a series of basic math/logic steps to be carried out by that chip.
CPUs were already pretty fast by then (at least for what most of the population was using them for) and were continuing to get dramatically faster every year with no end in sight. However, visual applications require an enormous amount of computational power. As such, a lot of cool stuff (real-time 3D rendering, for example) was far out of our reach because CPUs at the time simply weren't fast enough to get the job done. There was great demand to be able to produce these kinds of applications without needing a supercomputer. Special chips (microprocessors) were designed specifically for doing these tasks. We called these microprocessors graphics processing units, since they were processors designed for accelerating graphics-related tasks (although that name wasn't actually coined until Nvidia introduced it in 1999). It is a general principle that if you design a chip to do only one specific task (or a small number of tasks), you can make it very good at that task. CPUs are designed to do "everything reasonably well" while GPUs are designed to do "only graphics-related tasks extremely well."
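Most graphics work boils down to repeating one small operation over millions of independent pixels, which is what makes it such a good fit for a specialized chip. Here's a minimal C++ sketch of a per-pixel brighten pass; the function name and numbers are made up for illustration, it just shows the kind of work a GPU is specialized for:
Code:
#include <algorithm>
#include <cstdint>
#include <vector>

void Brighten(std::vector<uint8_t>& pixels)
{
    // A CPU walks this loop (mostly) sequentially, a few pixels at a time.
    // A GPU can run thousands of these iterations at once, because no pixel
    // depends on any other -- exactly the kind of task you can build
    // specialized hardware for.
    for (uint8_t& p : pixels)
        p = static_cast<uint8_t>(std::min(255, p + 40));
}

int main()
{
    std::vector<uint8_t> pixels(1920 * 1080, 100);  // one 1080p grayscale frame
    Brighten(pixels);
}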
Not every PC needed a good GPU, so they were sold on separate expansion cards that allowed consumers who wanted them to buy them separately and add them to a PC. These "graphics cards" contained a number of chips on a separate circuit board that plugged into the motherboard via PCI. One of these chips is the actual processor that does the calculations (the GPU). You also have RAM chips, since GPUs have their own memory banks, separate from the rest of the system, called "dedicated video RAM" or just "video RAM" for short. Then there is the RAMDAC (random access memory digital-to-analog converter), which converts the digital data in video RAM into an analog video signal (remember that back then all display systems were analog CRTs). There is also a ROM chip that stores the BIOS (basic input/output system, also called firmware or microcode) for the GPU.
Like I mentioned earlier, we didn't start using the term "GPU" until 1999. The GPU is a single chip (or set of chips, in the case of early cards) out of many on the graphics card. As for graphics cards themselves, for a while every company had its own name for these things: video card (short for video display adapter card), graphics card, display card, video adapter, graphics board, display adapter, graphics adapter, and so on. Video card and graphics card became the most common terms. In the 90s they were often separated into 2D accelerators (also called 2D cards) and 3D accelerators (also called 3D cards), since video cards from that era often supported only 2D or 3D rendering acceleration, not both. The processors themselves were also referred to as 3D accelerators or 2D accelerators. Eventually chips that did both became popular, and so those terms went away.
In addition, I should point out that early video chipsets and 2D/3D accelerators consisted of multiple chips (early Voodoo card 3D accelerators consisted of 3 chips: a texture management unit, a rasterizer, and a triangle engine, and they required a separate card to handle 2D rendering, which in turn could have multiple chips). In fact, the earliest processors that could be considered GPUs were used in mainframes and supercomputers in the 80s and consisted of dozens of chips. Over time integration took over, and dozens quickly became less than a dozen and eventually "a couple." By the time GPUs hit the consumer level, 3D accelerators were down to 2 or 3 chips for the processor. This eventually became 1 chip. And eventually 2D and 3D processors were merged into a single chip. Today all GPUs are single-chip architectures that do both 2D and 3D acceleration.
Today the term "GPU" refers to a graphics processor that can implement an entire 2D and 3D graphics pipeline in hardware. The term was coined by Nvidia to market its new geforce graphics cards which were the first to be able to do that. You see to render 3D graphics a series of specific tasks has to be performed in order. The output of each task is the input for the next task. This is called a pipeline. So we refer to the chain of tasks needed to render 3D graphics as the "graphics pipeline". When 3D accelerators first came around in the mid 90s only some of these tasks could be performed by the graphics chip in hardware. The rest had to be done by the cpu (using software). Over time 3D accelerators became more and more advanced adding the ability to "hardware accelerate" more and more of these tasks until eventually the entire process (pipeline) could be done from start to finish on the graphics chip and the only thing the cpu had to do was start things off by sending the necessary data and kernels to the graphics chip. At this point we started called them GPUs. By the way the term "hardware accelerate" means that a specific piece of extra hardware in the system (in this case the GPU) is used to perform the task instead of the cpu. Since the specialized chip is supposed to be faster at it the term accelerate is used. This also has the benefit of freeing up the cpu to do other work since the work is being offloaded to the GPU.
Oh fuck, I haven't even talked about the video systems used in early PCs from the 80s (MDA, CGA, EGA, VGA, VDC, VSR, CRTC, VIC, VDP/VDU), or how modern GPUs differ from 1999 GPUs (fixed function vs. programmable). Or integrated graphics. Or laptops. Or SoCs. I should point out that video/graphics cards were around in the 1980s, but we didn't have GPUs, and we didn't usually call them video cards back then. They were really just video display adapters. They did not do any kind of 2D or 3D acceleration; the CPU did all of the rendering work, and the display adapters just contained a framebuffer and some basic logic. Most home computers before the IBM PC had their video systems built into the motherboard instead of on an expansion card, and those usually consisted of multiple chips even though there was usually no acceleration, with the CPU doing all of the text and image rendering. The IBM PC is really what made the video card idea the norm.
You might want to read through my min video card specs thread for some extra definitions:
http://forums.dolphin-emu.org/Thread-minimum-specs-video-card
I don't have time to write any more right now. I was going to talk about CPU vs. GPU architecture and the role of each in Dolphin as well as PC games, but I got sidetracked trying to establish the basic explanations and got caught up writing about history to elaborate on those definitions. I have not proofread any of this, as I am out of time. At least I'm 1/3 done with something that will make a good post on my website.
Quote: without needing a supercomputer special chips (microprocessors) were designing specifically
Should be
Quote: without needing a supercomputer. Special chips (microprocessors) were designed specifically
Other than that, I think it's all good.
Welly well, it's a gold mine of knowledge. This will come in handy.
I doubt it. I didn't even get to the interesting parts.
I gotta stop writing this stuff.