I want to point out that this is a fairly legitimate explanation of why asynchronous audio is very important to the platform. I find it easier to play games with asynchronous audio when using any emulator of consoles that used optical discs. The truth of the matter is this: accurately emulating hardware that used optical disc media (i.e., CDs, DVDs, and mini-DVDs) depends upon asynchronous audio to make games playable. That was part of Nintendo's focus with the GameCube: to make a powerful, efficient console that was fun to play and easy to develop for. I mean this as constructive criticism, not a rant or a complaint, and I do not mean to sound condescending. I have several ideas as to why asynchronous audio seemed so buggy to use.
I am going to be brutally honest here:
It seems that people are so focused on accuracy across Dolphin 4.0's many revisions that they have lost sight of balancing accuracy with playability and speed. Constantly renaming functions does not help either (more on that in a moment). I have heard the argument that because asynchronous audio was buggy in the newer builds (welcome to Windows 8.1 breaking everything, early adopters) and has since been removed, Dolphin has become "more accurate" in its emulation of the GameCube and Wii. That could not be further from the truth. The problem is that the GameCube and Wii both use a variable bit-rate (VBR) audio format, rather than the constant bit-rate format typical of Audio CDs. VBR formatting allowed the sound-effect audio to synchronize with the XFB and EFB video input and output in the console, while allowing music to play asynchronously on a separate thread. This freed up a lot of computing power on the console's somewhat limited (but very capable and scalable) CPU, which is based on the IBM PowerPC architecture (the Gekko is derived from the PowerPC 750, the same chip family used in the Macintosh G3). I do not see how synchronous audio is more accurate to the console, because it is impossible to make VBR audio fully synchronous without making things skip.
A little history will help explain the issue I have with this:
Emulating an SNES accurately requires synchronizing audio with the game's programming because the SNES used a CPU built on the same 65C816 core as the Apple IIgs. That is part of emulating the 65C816 architecture, which could take expansion hardware (on the SNES we called that the cartridge slot; don't believe me? How do you think the SuperFX chip worked?) to add a faster processor (such as the IIgs's TransWarp GS accelerator). The SNES was a prime example of needing asynchronous audio to match your region's TV format: if you did not get it right, the audio would crackle and pop. Before the SNES, frame rates and audio got quite choppy when a person used a PAL version of a game (the Megadrive/Genesis and the NES are notorious for this). Sega actually lowered the CPU clock in the PAL Megadrive to match PAL timing, while Nintendo lowered the frame rate (play a PAL version of Kirby's Adventure on an emulator and compare it to an NTSC version; you will see the difference). These problems were largely cleared up on the Sony PlayStation because of its use of VBR asynchronous audio.
Why things are so buggy, and how these issues might be fixed:
1. Part of the problem is that each version of Dolphin (3.0, 3.5, 4.0) seems to establish a different naming standard and core layout for all the revisions that follow it. Things get moved around and renamed in the core code, and part of the code's behavior gets lost when trying to conform to the new standards.
To fix this, do not change naming standards.
2. The breakage of HLE DSP emulation with asynchronous audio has several likely causes. First, please take note that GameCube and Wii discs are actually read using Constant Angular Velocity. This alone is reason enough for asynchronous audio not only to be put back into Dolphin 4.x's core, but to be on by default, to make the process accurate. I will explain the timing, and several ideas on how to fix this issue, in a moment. First I need to explain Constant Angular Velocity (CAV) and asynchronous audio so that we are all on the same page.
CAV works in a funny way. The spindle speed is physically kept at a constant rate, while the speed at which data is read into RAM, cache, or a frame buffer constantly changes (it rises as the read head moves toward the outer edge of the disc). This means the data being output is in fact "asynchronous" to whatever is needed at that moment in time.
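To make the CAV behavior concrete, here is a small sketch (my own illustration with made-up example numbers, not Dolphin code or real GameCube drive specs) showing how, at a fixed spindle speed, the linear velocity under the read head, and therefore the raw data rate, grows with the head's radius:

```python
# Illustration only: approximate read rate under Constant Angular Velocity.
# At a fixed spindle RPM, the linear velocity under the head grows with
# radius, so the raw data rate is not constant across the disc.
import math

def cav_read_rate(rpm, radius_mm, bit_density_per_mm):
    """Return bits/second read at a given radius under CAV.

    rpm                -- constant spindle speed (revolutions per minute)
    radius_mm          -- current radial position of the read head
    bit_density_per_mm -- linear bit density along the track (assumed constant)
    """
    revs_per_sec = rpm / 60.0
    linear_speed_mm_s = 2 * math.pi * radius_mm * revs_per_sec
    return linear_speed_mm_s * bit_density_per_mm

# Hypothetical figures purely for illustration:
inner = cav_read_rate(rpm=1000, radius_mm=25, bit_density_per_mm=400)
outer = cav_read_rate(rpm=1000, radius_mm=38, bit_density_per_mm=400)
# The outer edge reads faster than the inner edge by the ratio of the radii.
print(outer / inner)  # ~1.52
```

The point of the sketch is only that the read rate scales linearly with radius under CAV, which is why the data stream arrives at an inconsistent rate.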
"Asynchronous audio" refers to the DSP reading data at a variable rate, calculating the various write (input) rates at which audio samples arrive, and sending them, at the right speed and sample rate, into the internal frame buffer, where they are synchronized and output to be displayed by the external frame buffer. Internal frames are measured in milliseconds (for example, a 64 ms buffer is a good size for a 60 Hz NTSC display; paired with an internal rate of 32,000 Hz, the native audio sample rate of the SNES, it is by far the best choice for hitting a VPS of 60).
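The relationship between a buffer's duration in milliseconds and its size in samples can be sketched as follows (my own illustration; the 64 ms and 32,000 Hz figures follow the SNES example above and are not Dolphin's actual values):

```python
# Illustration only: relating buffer duration (ms) to buffer size (samples).

def buffer_samples(duration_ms, sample_rate_hz):
    """Number of audio samples needed to cover duration_ms at sample_rate_hz."""
    return round(duration_ms * sample_rate_hz / 1000)

def buffer_duration_ms(num_samples, sample_rate_hz):
    """Duration in milliseconds that num_samples covers at sample_rate_hz."""
    return num_samples * 1000 / sample_rate_hz

print(buffer_samples(64, 32_000))        # 2048 samples
print(buffer_duration_ms(2048, 32_000))  # 64.0 ms
```

This is why buffer size and internal sample rate have to be tuned together: the same millisecond figure means a different number of samples at every rate.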
The solution is mind-bogglingly simple. For each "input rate" sample of audio generated by the GameCube, playback output samples should also be produced, but internally, by the DSP HLE ROM. The specific internal sample rate has no bearing on the external audio sample rate we hear through our speakers, but it does affect timing in a huge way. If the internal sample rate is too low for the audio buffer's size (in ms), you get erratic frame rates and audio skipping; if it is too high, you get pops and crackles. This rate must match the timing of the VPS at all times in order to get asynchronous audio and smooth frame rates working properly. Asynchronous audio means the game's audio is tied to the input/output rate of the internal frame buffer and synchronized to the internal speed of the console being emulated. Simply put, fixed audio timing (synchronous audio) is not the same as synchronizing a game using audio (asynchronous audio). Maybe we should add controls to the DSP section that let one set the internal sample rate with a slider and adjust frame buffer sizes, like the ones in current builds of SNES9x, and, in the core properties, add an option to disable "fixed audio timing" and enable "Sync Using Audio", as in the per-ROM settings in Project64.
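A minimal sketch of the asynchronous model described above: the emulated DSP produces samples on its own irregular schedule while the audio backend consumes fixed-size blocks at the output rate, with a ring buffer absorbing the rate mismatch. This is my own simplified illustration of the general technique, not Dolphin's actual implementation:

```python
# Illustration only: a ring buffer decoupling an irregular producer (the
# emulated DSP) from a fixed-rate consumer (the audio backend). Underruns
# would be heard as skips, overruns as drops -- the symptoms described above.
from collections import deque

class AudioRingBuffer:
    def __init__(self, capacity_samples):
        # maxlen makes the deque drop the oldest samples on overrun
        self.buf = deque(maxlen=capacity_samples)
        self.underruns = 0

    def produce(self, samples):
        """DSP side: push a variable-sized batch of samples."""
        self.buf.extend(samples)

    def consume(self, count):
        """Backend side: pull a fixed-size block; pad with silence on underrun."""
        out = []
        for _ in range(count):
            if self.buf:
                out.append(self.buf.popleft())
            else:
                self.underruns += 1
                out.append(0)  # silence -- audible as a skip
        return out

rb = AudioRingBuffer(capacity_samples=4096)
rb.produce([1, 2, 3])       # irregular batch from the emulated DSP
block = rb.consume(5)       # backend asks for a fixed block
print(block, rb.underruns)  # [1, 2, 3, 0, 0] 2
```

Tuning the buffer capacity is exactly the frame-buffer-size control suggested above: too small and the consumer hits underruns (skips), too large and latency grows.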
The point of this mess of a post is this: asynchronous audio matters for accuracy because that is how discs read using Constant Angular Velocity must function in order to read any data at all. All CD players use a buffer between what is being read and what is being played for us to hear. The CAV standard simply reads data at an inconsistent rate. Because asynchronous audio is likewise inconsistent with respect to the buffering of audio data while being synchronized to the input rate, logic suggests the two behave the same way. With CAV we physically see the disc spun at a constant rate while the data comes off the disc at an inconsistent rate; similarly, with asynchronous audio we hear a consistent stream of audio, but the audio written as input by the console is read inconsistently.