Dolphin, the GameCube and Wii emulator - Forums

Full Version: Why asynchronous audio should not have been removed....and an idea.
I want to point out that this is a fairly legitimate explanation as to why asynchronous audio is very important to the platform. I find it easier to play games using asynchronous audio in any emulator of a console that used optical discs. The truth of the matter is this: accurately emulating hardware that used optical disc media (i.e. CDs, DVDs, and mini-DVDs) depends upon asynchronous audio to make games playable. That was part of Nintendo's focus with the GameCube: to make a powerful, efficient, fun-to-play, and easy-to-develop-for console. I want to point out that this is meant as constructive criticism and not a rant or complaint. I have many ideas as to why asynchronous audio seemed so buggy to use. I do not mean to sound condescending, either.

I am going to be brutally honest here:
It seems that people are so focused on accuracy in Dolphin 4.0's many revisions that they have lost sight of balancing accuracy with playability and speed. Constantly renaming the functions of everything does not help (more on that in a moment). I have heard the excuse that since asynchronous audio was taken away because it was buggy in the newer builds (welcome to Windows 8.1 borking everything, early adopters), Dolphin has become "more accurate" at emulating the GameCube and Wii. This could not be farther from the truth than saying that the grass is blue. The problem is that the GameCube and Wii both use a variable-bit-rate audio format rather than what normal Audio CDs typically used (Audio CDs use a synchronous, constant-bit-rate format). In this case, VBR formatting allowed the special-effects audio to synchronize with the XFB and EFB video input and output in the console, but allowed asynchronous playing of music on a separate thread. This freed up a lot of computing power in the console's somewhat limited (but very powerful and extremely scalable) CPU and platform, which is based around the IBM Power architecture (the very same architecture used in the Macintosh G4). I do not see how using synchronous audio is more accurate to the console, because it is impossible to make VBR audio synchronous at all without making things skip.

A little history will help you understand the issue I have with this:
Emulating an SNES accurately requires audio synchronization with the game's localized programming because the SNES used the exact same CPU as the Apple IIgs. Part of emulating the MOS 6502CM architecture... which could use an expansion slot (we called that the cartridge slot on the SNES; don't believe me? How do you think it was possible to use the SuperFX chip?) to add a faster processor (TransWarp GS). The SNES was a prime example of needing asynchronous audio to match your region's TV format, because if you did not get it right, audio would crackle and pop. Before the SNES, frame rates and audio got pretty darn choppy when a person used a PAL version of a game (the Megadrive/Genesis and the NES are notorious for this). Sega actually lowered the CPU clock in the Megadrive (PAL Genesis) to match the timing of the PAL system, while Nintendo lowered the frame rate (play a PAL version of Kirby's Adventure on an emulator and compare it to an NTSC-timed version; you will see a difference). These problems were cleared up in the Sony PlayStation because of the use of VBR asynchronous audio.

Why things are so buggy and how to possibly fix these issues:
1. Part of the problem is that each version of Dolphin seems to have a different standardized naming system and core for all the revisions that follow in their respective core versions. The issue is that each version (3.0, 3.5, 4.0) had some things moved around and renamed in the core code. Part of the code gets lost when trying to conform to new standards by renaming and rearranging things.
To fix this, do not change naming standards.

2. The reason HLE DSP emulation with asynchronous audio may have been borked has several causes. First, please take note that GameCube discs and Wii discs actually use Constant Angular Velocity to read data. This alone is the entire reason why asynchronous audio should not only be put back into Dolphin 4.x's core, but be on by default to make the process accurate. I will explain the timing and several ideas on how to fix this issue in a moment. Right now I have to explain Constant Angular Velocity (CAV) and asynchronous audio so that we are all on the same page.

CAV works in a funny way. It works by physically keeping the spindle speed at a constant rate while the speed at which the data is written into RAM, cache, or the frame buffer constantly changes. This means that the data being output is in fact "asynchronous" to whatever is needed at that moment in time.
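The varying read rate under CAV follows directly from the disc geometry, whatever its relevance to DSP emulation may be. A minimal sketch; the rpm, radii, and bit density below are illustrative numbers, not GameCube disc specifications:

```python
import math

def cav_read_rate(rpm: float, radius_mm: float, bits_per_mm: float) -> float:
    """Bits/second read at a given track radius on a CAV drive.

    The head sees linear velocity v = 2*pi*r*(rpm/60); with a constant
    linear bit density along the track, read rate scales with radius.
    """
    linear_velocity_mm_s = 2 * math.pi * radius_mm * (rpm / 60.0)
    return linear_velocity_mm_s * bits_per_mm

inner = cav_read_rate(rpm=1000, radius_mm=25, bits_per_mm=1000)
outer = cav_read_rate(rpm=1000, radius_mm=38, bits_per_mm=1000)
# Outer tracks pass under the head faster, so they read proportionally faster.
assert abs(outer / inner - 38 / 25) < 1e-9
```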

"Asynchronous audio" refers to the DSP reading data at a variable rate and then calculating the various write (input) speeds (rate) at which audio samples are read (output) to the internal frame buffers..Internal frames are measured in microseconds (ie...64ms for monitors using 60Hz is a great buffer size for NTSC with an internal rate of 32,090...the native internal audio sample rate of the SNES for example...is by far the best choice of getting the VPS speed of 60) in speed, and sample rate, and sending them into the frame buffer, and synchronizing them and sending them as output to be displayed by the external frame buffer.
The solution is mind-bogglingly simple. For each "input rate" sample of audio being generated by the GameCube, playback output samples should also be produced, but internally by the DSP HLE ROM. The specific internal sample rate has no bearing on the external audio sample rate that we hear through our speakers as output, but it does affect timing in a huge way. If the internal sample rate is too low compared to the audio buffer size (in ms), you get erratic frame rates and audio skipping. If it is too high, you get pops and crackles. This rate must match the timing of the VPS at all times in order to get asynchronous audio working properly with smooth frame rates. Asynchronous audio is where the game's audio is tied to the internal input/output rate of the internal frame buffer and synchronized to the internal speed of the console being emulated. Simply put, fixed audio timing (synchronous audio timing) is not the same as synchronizing a game using audio (asynchronous audio). Maybe we should add controls to the DSP section that allow one to control the internal sample rate with a slider and to control frame buffer sizes, like the ones in current builds of SNES9x, while in the game properties adding an option to disable "fixed audio timing" and an option to enable "Sync Using Audio" like we see in the per-ROM settings in Project64.

The point of this mess of a post is this: the use of asynchronous audio is important to accuracy because that is how discs made using Constant Angular Velocity must function in order to read any data off the disc. All CD players use a sort of buffer between what is being read and what is being put out for us to hear. The CAV standard merely reads data inconsistently. Because asynchronous audio is inconsistent in its buffering of audio data and synchronized to the input rate, logic suggests that the two act the same. We physically see the disc being spun at a constant rate under CAV, but the data is read from the disc in an inconsistent manner. Similarly, with asynchronous audio we physically hear a consistent stream of audio, but that audio, written as input from the console, is read inconsistently.
hah.
You clearly have no idea what you're talking about and I'm not even sure why I'm wasting my time replying to such a mess of backseat programming based on no factual information. But here we go.

(03-30-2014, 05:55 AM)Wally123 Wrote: [ -> ]I want to point out that this is a fairly legitimate explanation as to why Asychronous audio is very important to the platform...I find it easier to play games using Asynchronous audio when using any emulator of consoles that used optical discs. The truth of the matter is this. Accurately emulating hardware that used optical disc based media (ie CD's and DVD's and mini-DVD's...) depends upon asynchronous audio to make games playable. That was part of Nintendo's focus on the GameCube..to make a powerful, efficient, fun to play and easy to develop for console. I want to point out that this is meant as a constructive criticism and not a rant or complaint. I have many ideas as to why it seemed so buggy to use asynchronous audio. I do not mean to sound condescending as well.

See, this is already starting so badly. How does optical media have anything to do with DSP emulation timeslices? DSP processing is done exclusively from RAM to RAM, with no involvement of any other form of data storage.

(03-30-2014, 05:55 AM)Wally123 Wrote: [ -> ]I am going to be brutally honest here:
It seems that people are so focused on accuracy in Dolphin 4.0's many revisions that they have lost sight on balancing accuracy with playability and speed.

No we haven't, and I think the tradeoffs we've made have been pretty good. Case in point:
  • Several games that required the software renderer to work properly can now work with hardware rendering.
  • Several games that required DSPLLE to run without regular freezes can now run with DSPHLE and work on computers two generations older.
  • CPU requirements actually mostly went down since the release of Dolphin 4.0.

(03-30-2014, 05:55 AM)Wally123 Wrote: [ -> ]Constantly renaming the functions of everything does not help (more on that in a moment).

I have no idea what you are talking about. Like, really.

(03-30-2014, 05:55 AM)Wally123 Wrote: [ -> ]I have heard the excuse that since asynchronous audio has been taken away because Asynchronous audio was buggy in the newer builds (welcome to Windows 8.1 borking everything early adopters)

Operating systems have nothing to do with this.

(03-30-2014, 05:55 AM)Wally123 Wrote: [ -> ]Dolphin has become "more accurate" in emulation towards the GameCube and Wii.

It has, read:

http://blog.lse.epita.fr/articles/38-emu...lphin.html
http://blog.delroth.net/2013/07/why-dolp...rocessing/

(03-30-2014, 05:55 AM)Wally123 Wrote: [ -> ]This cannot be farther from the truth than saying that the grass is blue. The problem is that the GameCube and Wii both use a variable bit-rate audio format

That is not true: the GameCube and Wii DSP handles three different audio formats: PCM8, PCM16, and ADPCM16. All three have a fixed bitrate.
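To make the distinction concrete: "fixed bitrate" means the bits-per-second depends only on the format and the sample rate, never on the audio content. A sketch assuming a nominal 32 kHz rate; the ADPCM frame layout used below (8-byte frames holding one header byte plus 14 four-bit samples, as commonly documented for GameCube DSP-ADPCM) is an assumption of mine, not something stated in this thread:

```python
def bitrate_bps(sample_rate_hz: int, bits_per_sample: float) -> float:
    """Fixed-bitrate stream: rate is a pure function of format and rate."""
    return sample_rate_hz * bits_per_sample

RATE = 32000  # nominal mixer rate, an assumption for illustration

pcm8 = bitrate_bps(RATE, 8)
pcm16 = bitrate_bps(RATE, 16)
# Assumed DSP-ADPCM layout: 8-byte frame = 1 header byte + 14 4-bit samples,
# so 64 bits carry 14 samples.
adpcm = bitrate_bps(RATE, 64 / 14)

assert pcm8 == 256_000 and pcm16 == 512_000
assert adpcm < pcm8  # compressed, but still content-independent (not VBR)
```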

(03-30-2014, 05:55 AM)Wally123 Wrote: [ -> ]rather than what normal Audio CD's typically used (meaning Audio CD's use the synchronous continual bit-rate format). In this case, it means that VBR formatting allowed the special effects audio to synchronize with the XFB and EFB video input and output in the console...but allowed asynchronous playing of music on a separate thread. This freed up a lot of computing power requirements in the console's somewhat limited (but very powerful and extremely scalable) CPU and platform which is based around the IBM Power architecture (The very same architecture used in the Macintosh G4). I do not see how it is more accurate to use synchronous audio as more accurate to the console because it is impossible to make VBR audio synchronous at all without making things skip.

I started replying to that point by point but it's just so wrong I'll just abort and laugh at it.

(03-30-2014, 05:55 AM)Wally123 Wrote: [ -> ]A Little history will help understand that issue I have with this:
Emulating an SNES accurately requires audio synchronization with the game's localized programming because the SNES used the same exact CPU as the Apple IIgs. Part of emulating the MOS 6502CM Architecture...which could use an expansion slot (we called that the Cartridge slot on the SNES..don't believe me? How do you think it was possible to use the SuperFX chip?) to add a faster processor (TransWarp GS). The SNES was a prime example of needing asynchronous audio to match you region's TV format because if you did not get it right, audio would crackle and pop. Before the SNES, frame rates got pretty darn choppy and audio when a person used a PAL version of a game (The Megadrive/Genesis and the NES are notorious for this). Sega actually lowered the CPU clock in the Megadrive (PAL Genesis) to match timing of the PAL system, while Nintendo lowered the frame rate..play a PAL version of Kirby's Adventure o an emulator and compare it to an NTSC timed version...you will see a difference). These problems were cleared out in the Sony Playstation because of the use of VBR asynchronous audio.

Cool, this has nothing to do with Dolphin at all.

(03-30-2014, 05:55 AM)Wally123 Wrote: [ -> ]Why things are so buggy and how to possible fix these issues:
1. Part of the problem is that each version of Dolphin seems to have a different standardized naming system and core for all the revisions to follow in their respective core versions. The issue is that each version ( 3.0, 3.5, 4.0) had some things moved around and renamed in the core code. Part of the code get's lost when trying to conform to new standards by renaming and rearranging things.
To fix this, do not change naming standards.

Oh god, the code is changing! Stop changing the code, it will solve all problems!

You are an idiot.

(03-30-2014, 05:55 AM)Wally123 Wrote: [ -> ]2. The reason HLE DSP emulation with Asynchronous audio may have been borked has several causes. First, please take note that GameCube discs and Wii discs actually use Constant Angular Velocity to read data...This alone is the entire reason why asynchronous audio should be not only be put back in into Dolphin 4.x's core, but be on by default due to make the process accurate. I will explain the timing and several ideas on how to fix this issue in a moment. Right now I have to explain Constant Angular Velocity (CAV) and Asynchronous audio so that we are all on the same page.

CAV works in a funny way. It works by physically keeping the spindle speed at a constant rate while the speed at which the data is written into RAM, cache, or the frame buffer constantly changes. This means that the data being output is in fact "asynchronous" to whatever is needed at that moment in time.

Blablabla irrelevant bullshit (that is also not true, but who cares).

(03-30-2014, 05:55 AM)Wally123 Wrote: [ -> ]"Asynchronous audio" refers to the DSP reading data at a variable rate

Oh god, that's almost right!

(03-30-2014, 05:55 AM)Wally123 Wrote: [ -> ]and then calculating the various write (input) speeds (rate) at which audio samples are read (output) to the internal frame buffers.

Welp, so much for being right.

(03-30-2014, 05:55 AM)Wally123 Wrote: [ -> ]Internal frames are measured in microseconds (ie...64ms for monitors using 60Hz is a great buffer size for NTSC with an internal rate of 32,090...the native internal audio sample rate of the SNES for example...is by far the best choice of getting the VPS speed of 60) in speed, and sample rate, and sending them into the frame buffer, and synchronizing them and sending them as output to be displayed by the external frame buffer.

There is no notion of "internal frames" for audio in a generic way on GameCube. The DSP is a programmable CPU which runs audio emulation code provided by the game. In practice, there are 3 variants of audio emulation code: AX, AXWii, Zelda. AX uses 5ms internal frames, AXWii uses 3ms internal frames, Zelda uses 5ms internal frames. We do not get to choose that.
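Converting those fixed frame lengths into sample counts, assuming a nominal 32 kHz mixing rate (an approximation on my part; the exact hardware rate differs slightly):

```python
# Internal frame lengths per ucode variant, as given in the reply above.
FRAME_MS = {"AX": 5, "AXWii": 3, "Zelda": 5}
RATE_HZ = 32000  # nominal mixing rate, assumed for illustration

samples_per_frame = {u: RATE_HZ * ms // 1000 for u, ms in FRAME_MS.items()}
# The game's own audio code fixes these sizes; an accurate emulator
# cannot pick its own "internal frame" length.
assert samples_per_frame == {"AX": 160, "AXWii": 96, "Zelda": 160}
```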

(03-30-2014, 05:55 AM)Wally123 Wrote: [ -> ]The solution is mindbogglingly simple...For each "input rate" sample of audio being generated by the GameCube, Playback Output samples should also be produced... but internally by the DSP HLE ROM.

This breaks because audio data goes back and forth between the CPU and DSP, sometimes 2 to 3 times. You cannot have larger "internal" HLE buffers if you want to keep this behavior, which is required for a lot of sound effects.
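A toy model of why batching frames inside HLE would break that round trip; `dsp_mix` and `cpu_feedback` are invented stand-ins for illustration, not real Dolphin functions:

```python
def dsp_mix(dry: int, effect_in: int) -> int:
    """Stand-in for one DSP frame of mixing: dry signal plus effect input."""
    return dry + effect_in

def cpu_feedback(dsp_out: int) -> int:
    """Stand-in for game CPU code post-processing the DSP output,
    e.g. computing an echo send for the next frame."""
    return dsp_out // 2

def frame_by_frame(dry_frames):
    # Accurate model: the CPU runs between every DSP frame and feeds
    # part of each frame's output back in as next frame's effect input.
    out, fb = [], 0
    for dry in dry_frames:
        mixed = dsp_mix(dry, fb)
        fb = cpu_feedback(mixed)
        out.append(mixed)
    return out

def batched(dry_frames):
    # Large internal HLE buffer: the CPU only sees output after the
    # whole batch, so the per-frame feedback never happens.
    return [dsp_mix(dry, 0) for dry in dry_frames]

assert frame_by_frame([10, 10, 10]) == [10, 15, 17]
assert batched([10, 10, 10]) == [10, 10, 10]  # the effect is silently lost
```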

(03-30-2014, 05:55 AM)Wally123 Wrote: [ -> ]The specific internal sample rate has no baring on the external audio sample rate that we hear through our speakers as output, but it does affect timing in a huge way. If the internal sample rate is too low compared to the audio buffer in size (in ms), you get erratic frame rates and audio skipping. If it is too high, you get pops and crackles. This rate must match the timing of the VPS at all times in order to get asynchronous audio working properly and smooth frame rates.. Asynchronous audio is where the game's audio is tied to the internal input/output rate of the internal frame buffer, and synchronized to the internal speed of the console being emulated.. Simply put...Fixed audio timing (synchronous audio timing) is not the same as synchronizing a game using audio (asynchronous audio). Maybe we should add controls to the DSP section that allow one to control the internal sample rate with a slider and allow one to control frame buffer sizes like the ones in current builds of SNES9x. While in the core properties add a function to disable "fixed audio timing" and options to enable (Sync Using Audio) like we see in the specific ROM settings in Project64.

Wrong in too many ways to try and explain, sorry.

(03-30-2014, 05:55 AM)Wally123 Wrote: [ -> ]The point to this mess of a post I make is this....The use of Asynchronous audio is important to accuracy because that is how discs made using Constant Angular Velocity must function in order to read any data off the disc. All CD players use a sort of buffer between what is being read and what is being put out for us to hear. The CAV standard merely reads data inconsistently. Because Asynchronous audio is inconsistent to the buffering of audio data and synchronized to the input rate, logic suggests that the two act the same. We physically see the disc being spun at a constant rate using CAV, but the data is read from the disc in an inconsistent manner. Similarly, with Asynchronous audio we physically hear consistent streaming of audio...but that audio written as input from the console is read inconsistently.

The point I'm trying to make is that you have no idea what you are talking about and you should avoid trying to teach us how to do our job. Go fork the project if you are not happy about the direction in which we're going.

Let's play a little game: if your next post contains any factual mistake regarding how GameCube DSP audio processing works, I'll just ban you without even bothering to reply.
The one thing I'd like to know is this: with all the data and documentation available for the GameCube and Wii, and all the reverse engineering and testing that has been done, how exactly did the OP manage to come up with this conclusion?
No need to be rude. I should inform you that I have Asperger's Syndrome and I am the type that often over-explains things. I admit to knowing nothing about how the DSP works, but I do know that, for some reason, without asynchronous audio, minor FPS drops cause the audio to stutter almost instantly when playing games that lock the FPS count and VPS count together (yes, I do see the VPS count in the title bar of the games I run). I don't know why it does that.
I believe you were the rude one waltzing in here saying that you knew exactly what was wrong without knowing a lick about the DSP. Seriously, if you're going to "overexplain" something, at least understand the bare basics of what you're explaining.
Whenever the game slows down (and because Dolphin is an emulator, emulation does slow down), synchronous audio makes that slowdown break up the audio. That is why.
[Image: be7.jpg]

*So* many completely incorrect assumptions, I don't even know where to begin. I can only say that if you'd actually paid some real amount of attention to the changes in the DSP code (in addition to knowing anything about how the DSP worked), you wouldn't have come to any of these insane conclusions. Comparing everything to how other consoles work is also mostly invalid – very little logic is universally applicable when it comes to the low-level details of video game console hardware.

I especially enjoyed the "do not change naming standards" part; that's like living in a house for a decade, shitting on the floor everywhere, and never cleaning anything up. It makes life much harder and you're liable to get some sort of medical condition from it. Code cleanup is a vital part of development; you can't leave ugly code lying around.
(03-30-2014, 07:54 AM)JMC47 Wrote: [ -> ]I believe you were the rude one waltzing in here saying that you knew exactly what was wrong without knowing a lick about the DSP. Seriously, if you're going to "overexplain" something, at least understand the bare basics of what you're explaining.

I admit to and apologize for being rude. I do not ever mean to be. I will shorten everything by turning it into a question...


Why can't someone submit a revision that adds a functional checkbox to the DSP menu or the game-specific properties menu to enable or disable asynchronous audio? That way certain users can turn it off, while other users who are having issues can turn it on.
Yeah, I think the OP got the point, so everyone may stop bullying someone who tried to bring up a topic he deemed important. I'll delete and warn off any following posts like that.