12-15-2011, 11:05 PM
12-16-2011, 05:07 AM
(12-15-2011, 11:05 PM)dannzen Wrote: [ -> ]and downscaling IS GAY
So you don't like AA :p
12-16-2011, 05:20 AM
(12-16-2011, 05:07 AM)ExtremeDude2 Wrote: [ -> ](12-15-2011, 11:05 PM)dannzen Wrote: [ -> ]and downscaling IS GAY
So you don't like AA :p
4xIR and AA?
gl hf

that's why everyone is complaining about fps
downscaling is always gay... fagotts
http://stackoverflow.com/questions/384991/what-is-the-best-image-downscaling-algorithm-quality-wise
the quality is always worse compared to the original picture
your brain downscales everything without quality loss...
but software needs to use mathematical algorithms... which means quality loss...
just downscale and upscale a picture 2-3 times...
in short... pixels are merged together which means information is lost... forever
12-16-2011, 07:25 AM
I know you're trolling, but I'll bite just this once in case you're not.
Quote:downscaling is always gay... fagotts
Please stop calling people faggots. This is at least the third time you've done it that I have seen. Also the link you posted doesn't express a negative opinion of downscaling.
Quote:the quality is always worse compared to the original picture
......lol. No it isn't. In fact downscaling is the only way to achieve cinema-quality CGI. Using a box filter with SSAA is still the single best way to eliminate every type of artifacting known (geometry aliasing, texture aliasing, shadow aliasing, shader/specular aliasing, blurry textures, blocky textures, texture shimmering, non-uniform texture sharpness, etc.). Downscaling is the only way to display an image that has a higher resolution than your display without zooming in on a section of it. So you can either render the image at a lower resolution and capture less information from the scene, or render it at a higher resolution than your display, capture more information from the scene, and use a downscaling filter to make it fit. As long as you use the right filter for the right situation, downscaling significantly improves image quality. If you don't like it then you should stay away from movies with CGI, SSAA in video games, and nearly every professional rendering application.
Quote:your brain downscales everything without quality loss...
but software needs to use mathematical algorithms... which means quality loss...
lolwut
Your brain does not downscale images, at least not in the sense of what we think of as downscaling.
Second of all, your logic of "it's math, therefore there must be a quality loss" makes no sense.
Third of all, your brain would have to use a mathematical algorithm as well; the only alternative is magic.
Quote:in short... pixels are merged together which means information is lost... forever
Yeah.....information that would have to be removed anyway, because the resolution of the image is too high to display the whole thing. And not every scaling algorithm blends pixels together.
Quote:just downscale and upscale a picture 2-3 times...
Your point being? When are we going to do this? Dolphin either upscales the image if its resolution is lower than your output resolution BECAUSE IT HAS TO, or downscales the image if its resolution is higher than your output resolution BECAUSE IT HAS TO. The only alternative is to use a fractional internal resolution (render the image at the same resolution as your desired output), and even that requires some downscaling/upscaling to correct the aspect ratio. Without scaling filters you wouldn't be able to watch videos or view images in fullscreen, change the aspect ratio of photos/videos, or render 3D content at a higher resolution than your display resolution. Downscaling is great for 3D rendering because it allows you to capture more information from the scene than you normally could if you were constrained by your display resolution.
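For what it's worth, here's a minimal sketch of that "because it has to" fit computation (not Dolphin's actual code; the function name and the 640x528 / 2x resolutions are just plausible example values):
Code:
def fit_to_display(src_w, src_h, dst_w, dst_h):
    """Scale (src_w, src_h) to fit inside (dst_w, dst_h) while
    preserving the aspect ratio (letterbox/pillarbox the rest)."""
    scale = min(dst_w / src_w, dst_h / src_h)  # <1 means downscale, >1 means upscale
    return round(src_w * scale), round(src_h * scale)

# 2x internal resolution on a 1080p display: must downscale.
print(fit_to_display(1280, 1056, 1920, 1080))  # (1309, 1080)

# 1x internal resolution on the same display: must upscale.
print(fit_to_display(640, 528, 1920, 1080))    # (1309, 1080)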
Your "downscaling is bad" argument is too general. Which scaling method? What circumstance is it being used in?
12-16-2011, 08:35 AM
Aliasing theory for dummies: (just because I'm bored right now)
Signal processing (without actually getting into signal processing):
1. Take an input (raster) image.
Geometric analogy: Take a 4-dimensional input vector (u1,u2,u3,u4)
2. Pick a filter to transform the input image. A filter is a mathematical function which does some fancy (or simple) math to produce an output image from the input image.
Geometric analogy: A 2x4 matrix (which will transform our 4-dimensional vector to a 2-dimensional one):
m11,m12,m13,m14
m21,m22,m23,m24
3. Apply the filter to your input image, the result is your output image.
Geometric analogy: Multiply the 2x4 matrix with our 4-dimensional vector (which is equivalent to a 4x1 matrix, i.e. the result will be a 2x1 matrix aka 2 dimensional vector):
Matrix*Vector=(m11*u1+m12*u2+m13*u3+m14*u4, m21*u1+m22*u2+m23*u3+m24*u4)
4. Notice how, depending on the type of filter used, the entropy of the output image is lower than that of the input image (omgz!). That means quality was lost during the process; in layman's terms: the image quality got poorer.
Geometric analogy: We've transformed a 4-dimensional vector into a 2-dimensional one. That means there are two fewer degrees of freedom. To put it simply: you can store more different vectors with 4 numbers than with merely 2. There's no way to uniquely reconstruct the input vector from the output vector. Just like with our images, information gets lost.
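To make the geometric analogy above concrete, here's a minimal sketch (assuming Python with numpy; the matrix entries are arbitrary illustration values):
Code:
import numpy as np

# A 2x4 "filter" matrix: maps 4-dimensional inputs to 2-dimensional outputs.
M = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.0, 0.0, 0.5, 0.5]])

u1 = np.array([1.0, 3.0, 2.0, 8.0])   # 4-dimensional input vector
print(M @ u1)                         # [2. 5.]

# A different input collapses to the same output, so the input can
# never be uniquely reconstructed: information is lost for good.
u2 = np.array([3.0, 1.0, 8.0, 2.0])
print(M @ u2)                         # [2. 5.]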
Signal processing during rasterization: (let's make things more abstract)
1. Take some input 3D geometry (analogous to the input image before).
For example: A group of 8 vertices representing a cube.
2. Run it through your vertex shader and apply some fancy transformations to it (doesn't really matter in this context, so ignore this if you have no idea about shaders)
For example: Rotate your cube by 45° around the x-axis.
Now let's assume you want to somehow display the geometry to the user. But how do you do that?
3. You basically apply a filter which takes the input geometry and produces an output (raster) image from that (for those who care, this is what pixel shaders are doing).
For example: Paint your cube as a group of pixels which are drawn blue.
4. Notice how we lost information again? In case you haven't, read the Wikipedia article about aliasing. The problem is that our input geometry is "infinitely detailed", but we want to forcefully store all the information of the geometry in a raster image, which has a limited resolution. Anyway, I don't feel like explaining this further/properly now, so I'll just assume you understood this part.
Point is, the geometry->image filter reduces entropy, i.e. loses information, i.e. reduces image quality.
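If the "limited resolution can't hold infinite detail" point feels abstract, here's a hedged one-dimensional sketch of the same effect (classic undersampling, nothing Dolphin-specific): sampled too sparsely, a high-frequency signal becomes indistinguishable from a low-frequency one.
Code:
import numpy as np

# Sample a 9 Hz sine at only 10 samples per second.
# Nyquist demands more than 18 samples/s, so 9 Hz aliases to -1 Hz.
t = np.arange(0.0, 1.0, 0.1)          # 10 sample points
high = np.sin(2 * np.pi * 9 * t)      # the "detailed" input signal
low = np.sin(2 * np.pi * -1 * t)      # its low-frequency alias

print(np.allclose(high, low))         # True: the samples are identical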
Fixing the problem of aliasing:
1. What I said before almost always happens when trying to convert geometry to raster images, so aliasing can't really be fixed. It can only be reduced to the extent where it's tolerable.
2. So how can we reduce aliasing? Easy, just increase the resolution of your output image!
3. "But.. but... Now my screen is smaller than the output buffer which contains the rendered image
" Nice, you almost understood the problem with this approach
4. Now you need to downscale the output image to your screen resolution, and that *drumroll* causes information loss as well. So it's basically a tradeoff, you either lose much information from converting geometry to a raster image, or you lose some information from converting geometry to a (bigger) raster image and lose some more information when downscaling it to your actual screen resolution.
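To ground step 4, here's a minimal sketch of the box-filter downscale a 2x2 SSAA resolve performs (assuming Python with numpy; a toy illustration, not Dolphin's actual resolve code). Each output pixel is the plain average of a 2x2 block of supersampled pixels:
Code:
import numpy as np

def box_downscale_2x(img):
    """Average each 2x2 block into one pixel (a 2x2 SSAA resolve)."""
    h, w = img.shape
    assert h % 2 == 0 and w % 2 == 0, "box resolve needs an integer factor"
    return (img[0::2, 0::2] + img[0::2, 1::2] +
            img[1::2, 0::2] + img[1::2, 1::2]) / 4.0

# A 4x4 supersampled image with a hard diagonal edge...
super_img = np.array([[1, 1, 1, 0],
                      [1, 1, 0, 0],
                      [1, 0, 0, 0],
                      [0, 0, 0, 0]], dtype=float)

# ...resolves to a 2x2 image whose edge pixels are smoothed: that
# smoothing is the anti-aliasing, and also the information loss.
print(box_downscale_2x(super_img))
# [[1.   0.25]
#  [0.25 0.  ]]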
------------------
So... almost done now. I guess you might now know enough to understand my actual point:
Aliasing is caused by the information loss from running the input geometry->output image filter (aka pixel shader).
Aliasing is caused no matter how big your output image is, but the amount of information lost decreases with an increasing image size.
Information loss equals quality decrease.
The information you preserve by rendering to a bigger output image is decreased by the information loss when downscaling. As it happens, though, the information that remains is still better than what remains without SSAA.
Fwiw, that's where the different downscaling filters come into play: By picking a smarter filter than a box filter, the information loss in the downscaling step is reduced further.
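As a rough illustration of picking a smarter filter (a sketch assuming Python with a recent Pillow installed; "screenshot.png" and the target size are made-up example values):
Code:
from PIL import Image

img = Image.open("screenshot.png")        # hypothetical input image
target = (1280, 720)                      # arbitrary output resolution

# Box filter: fine for integer-factor SSAA resolves, but a poor
# choice when scaling to an arbitrary resolution.
img.resize(target, Image.BOX).save("box.png")

# Lanczos (windowed sinc): preserves more information at arbitrary
# ratios, at a higher computational cost.
img.resize(target, Image.LANCZOS).save("lanczos.png")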
Keep this strict separation in mind: there is the "quality loss when converting geometry to the output image" and the "quality loss when downscaling the supersampled image to the output image"; they are two distinct steps.
Now that should be enough for today... I don't know anymore if this actually fits the topic, but I felt like posting this somewhere ;P
Don't get me started on Fourier series and stuff, that'll end up in an even longer post.
12-16-2011, 08:13 PM
It would seem that the word 'dummies' means very different things where I come from and where you come from,
but don't worry, I got your final inference.
12-16-2011, 08:16 PM
Uh yeah, I forgot to shout a big WTF at this:
(12-16-2011, 07:25 AM)NaturalViolence Wrote: [ -> ]Using a box filter with SSAA is still the single best way to eliminate every single type of artifacting known
The box filter is one of the worst ones available; just about any other filter available is better regarding information preservation...
12-17-2011, 04:53 AM
OK, let me put that a different way. In video games and 3D rendering in general, the best way to eliminate artifacts is to use SSAA, and SSAA uses a box filter. Keep in mind that since SSAA scales the internal resolution by an equal integer factor in both directions, a box filter actually works reasonably well in this circumstance. If you've already rendered an image and you want to scale it to some arbitrary resolution then yes, it would be a bad choice.
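Here's a toy sketch of why integer factors suit a box filter (plain Python; the 8-pixel row and the function name are made up for illustration): at an integer factor every output pixel's window lines up exactly with source pixel boundaries, while a fractional factor has to split source pixels.
Code:
def footprints(src, dst):
    """Source-coordinate window covered by each destination pixel."""
    step = src / dst
    return [(i * step, (i + 1) * step) for i in range(dst)]

# Integer factor (8 -> 4): windows align with pixel edges, so a box
# average weights every source pixel exactly once.
print(footprints(8, 4))   # [(0.0, 2.0), (2.0, 4.0), (4.0, 6.0), (6.0, 8.0)]

# Fractional factor (8 -> 3): windows cut through source pixels, so a
# box filter must weight partial pixels and quality suffers.
print(footprints(8, 3))   # roughly [(0.0, 2.67), (2.67, 5.33), (5.33, 8.0)]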
12-17-2011, 05:13 AM
You both make valid points and know what you're talking about. I believe you both.
12-17-2011, 05:27 AM
and now my question...
4x IR 0xAA
or
3x IR 4xAA++
Quote:In video games and 3D rendering in general the best way to eliminate artifacts is to use SSAA, and SSAA uses a box filter.
or is it possible to use 4x IR with AA?
can the high-end models handle them and perform at 60fps?
and sorry, I was trolling a little bit
but he said that SSAA caused glitches...
that's why he is using 4x IR :> I forgot that