(11-19-2018, 11:36 AM)SCOTT0852 Wrote: [ -> ]If you're looking for high-poly models, unfortunately that's totally impossible. It's still a GCN/Wii, so you can't do things that they couldn't (aside from overclocking the CPU). You can get high-res texture packs, though. I'd recommend cranking the IR as high as your computer can handle (but not higher than your monitor's resolution) and using an HD texture pack for the best look; depending on the game, you might also increase AA or add shaders.
(11-19-2018, 04:01 PM)MayImilae Wrote: [ -> ]Project M did replacement models, so no. It's just hard. REALLY hard. And requires modifying the game ISO.
There is no reason to use AI for anything here. Tessellation is already designed to use data from higher-polygon models to allow for actually adding detail. That said, there's no way to integrate that into GameCube games without rebuilding the game's engine, and that is not something Dolphin can do.
Besides, if you already have better models built, then just swap them in by editing the ISO. There's no need for AI or tessellation or game engine modding or anything! It all comes down to the labor cost of making high-quality 3D models and assets. It's just hard, and no one is willing to do that. I'm certainly not going to spend hundreds of hours on it.
My reply was more of an academic answer than a practical one.
In other words, I guess you could ask:
Can you do to vertex data what anti-aliasing does to pixel data?
Can you automatically subdivide vertices in a way that is generally pleasing to the human eye,
and does something akin to 2D upscalers that detect known shapes within pixelated messes?
I doubt you can do it in real-time, but an offline baking program might be able to.
I guess I just threw in the AI/machine-learning part because that's how things are done these days, but it's
even possible that a bunch of clever math could achieve this geometric-upscaler approach.
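To make the "geometric upscaler" idea concrete, here is a minimal sketch of the most naive form of it: midpoint subdivision, where each triangle becomes four by splitting every edge. All names are illustrative; this is not code from Dolphin or Ishiiruka.

```python
# Naive midpoint subdivision: insert a new vertex at the midpoint of every
# edge, turning one triangle into four. This adds geometry but no new
# detail - it only densifies the existing surface.

def midpoint(a, b):
    return tuple((a[i] + b[i]) / 2.0 for i in range(3))

def subdivide(vertices, triangles):
    """vertices: list of (x, y, z); triangles: list of (i, j, k) index triples."""
    verts = list(vertices)
    edge_cache = {}  # shared edges between triangles get a single new vertex

    def edge_vertex(i, j):
        key = (min(i, j), max(i, j))
        if key not in edge_cache:
            verts.append(midpoint(verts[i], verts[j]))
            edge_cache[key] = len(verts) - 1
        return edge_cache[key]

    new_tris = []
    for i, j, k in triangles:
        ij, jk, ki = edge_vertex(i, j), edge_vertex(j, k), edge_vertex(k, i)
        new_tris += [(i, ij, ki), (ij, j, jk), (ki, jk, k), (ij, jk, ki)]
    return verts, new_tris

# One triangle -> four triangles, with three new midpoint vertices.
v, t = subdivide([(0, 0, 0), (1, 0, 0), (0, 1, 0)], [(0, 1, 2)])
print(len(v), len(t))  # 6 4
```

Smoothing schemes like Loop or Catmull-Clark additionally move vertices toward a smooth limit surface, but the core limitation is the same: they can only interpolate what the low-poly mesh already encodes.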
nb0hr1more Wrote:Can you do to vertex data what anti-aliasing does to pixel data?
That's just subdivision, and it leads to a large loss of detail when applied to low-polygon models like this. On a 3000-5000 poly character, a lot of detail is conveyed with single polygons, and even single verts and their precise placement, and subdivision destroys all of that.
(11-19-2018, 04:53 PM)MayImilae Wrote: [ -> ]That's just subdivision, and it leads to a large loss of detail when applied to low-polygon models like this. On a 3000-5000 poly character, a lot of detail is conveyed with single polygons, and even single verts and their precise placement, and subdivision destroys all of that.
True.
The same can also be said of very low pixel-count images. Upscaling will be destructive if there isn't enough sample data.
I guess subdivision losses might still be preferred if paired with some sort of blur to produce an impressionistic result,
akin to what the GameCube does on a CRT.
(11-20-2018, 01:35 PM)nbohr1more Wrote: [ -> ]Here we go:
https://www.youtube.com/watch?v=LeuX963jpn8
We can't change the models. Well, we can with iso mods, but we can't do much more than the GCN/Wii could without experiencing similar slowdown. We can overclock the CPU, but that doesn't matter since graphics are on the GPU.
(11-21-2018, 06:13 PM)SCOTT0852 Wrote: [ -> ]We can't change the models. Well, we can with iso mods, but we can't do much more than the GCN/Wii could without experiencing similar slowdown. We can overclock the CPU, but that doesn't matter since graphics are on the GPU.
Well, as most devs have said multiple times, Dolphin emulates an "infinitely" fast GPU, and thus it normally isn't the bottleneck during playtime...
Ishiiruka can already inject geometry via displacement and tessellation, so it doesn't seem out of the question
as a purely backend manipulation of the data. Though there are probably games where the geometry data is
manipulated on the CPU after delivery to the GPU by way of some sort of early/proto transform-feedback loop in
VRAM. In those cases, procedurally creating new geometry would be pathological for performance.
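For anyone unfamiliar with what "injecting geometry via displacement" means in principle: after tessellation densifies the mesh, each new vertex is pushed along its interpolated normal by a height sampled from a displacement texture. A hedged, purely illustrative Python sketch (the real Ishiiruka path runs on the GPU in shader stages; none of these names come from its code):

```python
# Displacement after tessellation, in miniature: offset a vertex along
# its normal by a sampled height value. Illustrative only.

def displace(position, normal, height, scale=0.05):
    """Return the vertex moved along its normal by height * scale."""
    return tuple(p + n * height * scale for p, n in zip(position, normal))

# A flat vertex with an up-facing normal and sampled height 1.0 rises by
# `scale` units; a sampled height of 0.0 leaves the vertex untouched.
print(displace((0.0, 0.0, 0.0), (0.0, 1.0, 0.0), 1.0))  # (0.0, 0.05, 0.0)
print(displace((1.0, 0.0, 2.0), (0.0, 1.0, 0.0), 0.0))  # (1.0, 0.0, 2.0)
```

This is cheap per-vertex work on the GPU, which is why it fits Dolphin's "infinitely fast GPU" model; the pathological case described above arises only when the CPU has to read the generated geometry back.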
The video was a hint that someone had created an automatic algorithm that universally works in an offline/baked context,
and (again) it leads to questions like "could something similar be done in real-time?" or, if not, "could this operation be
added as a precompile step, similar to what the Ubershaders 'compile before start' feature does?"
I guess this type of discussion is what you'd pester Tino about in the Ishiiruka thread
(similar to his long discussions about 2D texture upscaling tech and what is practical from a real-time perspective).