Implementing Core Audio as native ReactOS Audio API

Tobi
Posts: 44
Joined: Wed Jun 18, 2008 12:29 am

Implementing Core Audio as native ReactOS Audio API

Post by Tobi »

Sounds strange...

The following thoughts came to mind:

1) The problem with Windows has always been the high-latency DirectSound audio system, which makes the OS unsuitable for more advanced audio recording tasks out of the box. You always have to resort to ASIO workarounds.

2) In ReactOS, all Direct3D stuff is wrapped onto OpenGL.

D3D is quite advanced, so this 3D wrapping is intended to be temporary. In audio, though, Core Audio is the superior API in my opinion. Many DJs use MacBooks for live performances because of the powerful Core Audio API already included in Mac OS X. No unstable third-party ASIO workarounds required...

So I had the following audio architecture idea for ReactOS:

[ architecture diagram: external image ]

The main native audio API would be Core Audio (including OpenAL), probably using Windows NT ASIO driver extensions. Similar to the current ReactOS 3D graphics system, the old NT 5.2 sound APIs such as DirectSound and the old wavemapper stuff would be wrapped on top of it...
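Roughly, the wrapping could look something like this. It's a purely hypothetical C++ sketch: the names NativeAudioQueue and LegacySoundBufferShim are invented for illustration and don't exist anywhere; it just shows a legacy DirectSound-style buffer forwarding its data to a native low-latency layer, the same way D3D calls get mapped onto OpenGL today.

Code: Select all

#include <cstdint>
#include <cstddef>
#include <vector>

// Stand-in for whatever the native low-latency layer would expose (invented name).
class NativeAudioQueue {
public:
    void Submit(const int16_t* samples, std::size_t count)
    {
        (void)samples; (void)count;   // a real layer would hand these to the driver
    }
    void Start() {}
    void Stop()  {}
};

// Legacy-style buffer object that forwards to the native layer instead of kmixer.
class LegacySoundBufferShim {
public:
    explicit LegacySoundBufferShim(NativeAudioQueue& q) : queue_(q) {}

    // Roughly what a DirectSound-style Lock / memcpy / Unlock / Play sequence
    // would translate into.
    void Write(const int16_t* samples, std::size_t count)
    {
        staging_.assign(samples, samples + count);
        queue_.Submit(staging_.data(), staging_.size());
    }
    void Play() { queue_.Start(); }
    void Stop() { queue_.Stop(); }

private:
    NativeAudioQueue&    queue_;
    std::vector<int16_t> staging_;
};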

I don't know whether an implementation like that would actually be possible with existing NT ASIO drivers and DirectSound, but I think it would definitely be a sexy idea!

Of course there is no existing Windows audio software using Core Audio at the moment, but maybe such a combination would arouse the interest of commercial audio software manufacturers in ReactOS... ;)

Z98
Release Engineer
Posts: 3379
Joined: Tue May 02, 2006 8:16 pm
Contact:

Re: Implementing Core Audio as native ReactOS Audio API

Post by Z98 »

An API is not inherently high latency. It is very much the implementation behind the API that determines a platform's performance. In the case of NT, it may very well be the underlying scheduling and resource management that produce the high latency, at which point it won't matter which API one uses. If you want improvements in this field, you need to examine the underlying architecture, understand where the issues are, and solve them there.

gigaherz
Posts: 92
Joined: Sat Jan 21, 2006 9:26 pm

Re: Implementing Core Audio as native ReactOS Audio API

Post by gigaherz »

A note on audio latency:

The reason Windows APIs are high-latency is that they run through the mixer components. Software mixing is very inefficient to do at the sample level, so the mixer has to buffer some audio and do batch processing instead. This is where most of the latency comes from.
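As a rough illustration (this is just the idea, not actual kmixer code): the mixer collects a whole block from each stream and mixes it in one pass, so nothing reaches the hardware until at least a block's worth of audio has been buffered.

Code: Select all

#include <cstddef>
#include <vector>

// Mix two streams one fixed-size block at a time. The block has to be full
// before it can be mixed and handed to the device, so the block length is a
// lower bound on the latency added by this stage.
std::vector<float> MixBlock(const std::vector<float>& a,
                            const std::vector<float>& b,
                            std::size_t blockSize)
{
    std::vector<float> out(blockSize, 0.0f);
    for (std::size_t i = 0; i < blockSize; ++i)
        out[i] = 0.5f * (a[i] + b[i]);   // one pass over the whole block
    return out;
}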

There are alternative APIs in Windows that are lower latency than DirectSound. At the lowest level there's Kernel Streaming, which involves talking to the drivers directly. Alternatives would be ASIO, or WASAPI on Vista+. ALL of the low-level APIs share one thing in common: they are exclusive. When one application takes the device, other applications can't use it.
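For reference, taking the device exclusively through WASAPI looks roughly like this. Treat it as a sketch only: it's Vista+ only, every HRESULT check is omitted, and a real program would negotiate the format with IsFormatSupported instead of hard-coding one.

Code: Select all

#include <windows.h>
#include <mmdeviceapi.h>
#include <audioclient.h>

// Open the default render endpoint in WASAPI exclusive mode (Vista+ only).
// While this stream exists, no other application can use the endpoint.
void OpenDefaultDeviceExclusive()
{
    CoInitializeEx(nullptr, COINIT_MULTITHREADED);

    IMMDeviceEnumerator* enumerator = nullptr;
    CoCreateInstance(__uuidof(MMDeviceEnumerator), nullptr, CLSCTX_ALL,
                     __uuidof(IMMDeviceEnumerator),
                     reinterpret_cast<void**>(&enumerator));

    IMMDevice* device = nullptr;
    enumerator->GetDefaultAudioEndpoint(eRender, eConsole, &device);

    IAudioClient* client = nullptr;
    device->Activate(__uuidof(IAudioClient), CLSCTX_ALL, nullptr,
                     reinterpret_cast<void**>(&client));

    WAVEFORMATEX fmt = {};
    fmt.wFormatTag      = WAVE_FORMAT_PCM;
    fmt.nChannels       = 2;
    fmt.nSamplesPerSec  = 48000;
    fmt.wBitsPerSample  = 16;
    fmt.nBlockAlign     = fmt.nChannels * fmt.wBitsPerSample / 8;
    fmt.nAvgBytesPerSec = fmt.nSamplesPerSec * fmt.nBlockAlign;

    // Ask for a ~3 ms buffer (REFERENCE_TIME is in 100-ns units); the driver
    // is free to round this up to its minimum supported period.
    const REFERENCE_TIME requested = 3 * 10000;
    client->Initialize(AUDCLNT_SHAREMODE_EXCLUSIVE, 0,
                       requested, requested, &fmt, nullptr);
}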

Another reason for the latency difference is buffer size: if you work at a low level you can get away with using tiny buffers (128 samples or even fewer per buffer), while the mixer driver used by DirectSound works in blocks of 512 samples (1024 in Vista+).
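To put numbers on that (assuming 44.1 kHz; exact figures depend on the sample rate and on how many buffers the pipeline keeps queued):

Code: Select all

#include <cstdio>

int main()
{
    const double rate    = 44100.0;              // samples per second
    const int    sizes[] = { 128, 512, 1024 };   // samples per buffer

    // 128 -> ~2.9 ms, 512 -> ~11.6 ms, 1024 -> ~23.2 ms per buffer;
    // real pipelines usually keep at least two buffers queued, doubling these.
    for (int n : sizes)
        std::printf("%4d samples -> %5.1f ms per buffer\n", n, 1000.0 * n / rate);
    return 0;
}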

If your sound card doesn't have a native ASIO driver, there's a wrapper driver called ASIO4ALL that uses Kernel Streaming internally. It's not as low-latency as a raw ASIO driver, but it can get quite close with good hardware.

EDIT: Also, I just noticed you put "wavemapper" on top of DirectSound. Assuming you mean the waveIn/waveOut set of APIs (winmm, IIRC), that's not accurate: both waveIn/waveOut and DirectSound work on top of the kernel mixer driver (or WASAPI in Vista+).
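For comparison, this is the winmm level in question: a minimal waveOut playback sketch (error checks omitted). Its buffers end up in the same mixer path as DirectSound's, which is why neither one is the low-latency route.

Code: Select all

#include <windows.h>
#include <mmsystem.h>   // link with winmm.lib
#include <vector>

// Play one second of silence through waveOut (winmm); error checks omitted.
void PlaySilence()
{
    WAVEFORMATEX fmt = {};
    fmt.wFormatTag      = WAVE_FORMAT_PCM;
    fmt.nChannels       = 2;
    fmt.nSamplesPerSec  = 44100;
    fmt.wBitsPerSample  = 16;
    fmt.nBlockAlign     = fmt.nChannels * fmt.wBitsPerSample / 8;
    fmt.nAvgBytesPerSec = fmt.nSamplesPerSec * fmt.nBlockAlign;

    HWAVEOUT hwo = nullptr;
    waveOutOpen(&hwo, WAVE_MAPPER, &fmt, 0, 0, CALLBACK_NULL);

    std::vector<short> samples(44100 * 2, 0);    // 1 s of stereo silence
    WAVEHDR hdr = {};
    hdr.lpData         = reinterpret_cast<LPSTR>(samples.data());
    hdr.dwBufferLength = static_cast<DWORD>(samples.size() * sizeof(short));

    waveOutPrepareHeader(hwo, &hdr, sizeof(hdr));
    waveOutWrite(hwo, &hdr, sizeof(hdr));        // queued into the mixer path
    Sleep(1100);                                 // crude wait for playback to finish

    waveOutUnprepareHeader(hwo, &hdr, sizeof(hdr));
    waveOutClose(hwo);
}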
