Mac Core Audio
First, some background info: I'm writing a MacOS/X application that uses CoreAudio to receive an audio signal from a CoreAudio device's input stream, do some real-time processing on the audio, and then send it back to that CoreAudio device's output stream for the user to hear. This application uses the lower-level CoreAudio APIs (i.e. AudioDeviceAddIOProc, AudioDeviceStart, etc - not AudioUnits) to grab exclusive access to a user-specified CoreAudio device, set it to the desired sample rate (96kHz), and do its thing. It works very well, and I'm quite happy with its performance.

However, my program currently has a limitation - it can only use a single CoreAudio device at a time. What I'd like to do is extend my application so that the user can choose an "input CoreAudio device" and an "output CoreAudio device" independently of each other, rather than being restricted to a single CoreAudio device that supplies both the input audio source and the output audio sink.

My question is: what is the recommended technique for doing this? I can require that both CoreAudio devices be settable to the same sample rate, but even once I do that, I think I will still have to handle several issues, such as:

- Integrating the separate AudioDeviceStart()-initiated callbacks of the two devices, which I suspect will not be called in any well-defined order, and might even be called concurrently with respect to each other(?). I would need to pass audio from one callback to the other somehow, ideally without significantly increasing audio latency.
- Handling differences in the sample-clock rates of the two devices. Even if both devices are nominally set to a 96kHz sample rate, I suspect it may actually be the case that (e.g.) the upstream device is producing samples at 95.99999kHz while the downstream device is consuming them at 96.000001kHz (or vice versa), which would eventually leave me with either "not enough" or "too many" samples to feed the downstream device during a given rendering callback, causing a glitch.
- Any other gotchas that I haven't considered yet.

How do other MacOS/X programs handle these issues?

Some time ago I played with a proof-of-concept playground audio mixer in C. The library uses the lowest-level Core Audio API available, working directly with things like AudioDeviceCreateIOProcID and AudioObjectAddPropertyListener. None of it is finished, but things do actually work.