What’s New for Core Audio APIs in Windows 7
The Core Audio APIs were introduced in Windows Vista. They provide a set of user-mode audio components that a client application can use to render or capture audio streams with improved audio capabilities. For a general overview of this API set, see About the Windows Core Audio APIs.
The Core Audio APIs have been improved in Windows 7. The following table summarizes the new features and the improvements to the Core Audio APIs:
Feature | Description |
---|---|
Generic improvements | Several existing features have been improved in Windows 7. |
Communication device (New) | In this release, a new device type has been added to the Sound control panel: the communications device. This device is used primarily for communications, that is, to place or receive phone calls on the computer. A communication application can use Core Audio components to get a reference to the endpoint of the default communication device and render audio streams for communication purposes. The operating system considers a stream opened on a communication device to be a communication stream. The WASAPI operations on a communication stream are the same as on any other audio stream. For more information, see Working with Device Roles. A minimal sketch that retrieves the default communications endpoint appears after this table. |
Stream attenuation or audio ducking (New) | Automatic ducking, or stream attenuation, is a new feature in Windows 7 that is intended for VoIP and Unified Communications applications. By default, the operating system reduces the intensity of other audio streams when a communication stream, such as a phone call, is received on the communications device. The volume options are set by the user in the Sound control panel. New APIs have been added to the Windows SDK that enable applications to replace the default ducking behavior. For more information about implementing a custom ducking feature, see Providing a Custom Ducking Behavior. A sketch that opts a session out of the default ducking behavior also appears after this table. |
Stream routing (New) | In Windows 7, the Core Audio APIs have been improved to transfer an audio stream seamlessly from an existing device to a new default audio endpoint. High-level audio API sets that are built on the Core Audio APIs, such as Media Foundation, DirectSound, and the WAVE APIs, implement the stream routing feature. Media applications that use these API sets to play or capture a stream get the default implementation and do not have to be modified. However, if your media application uses the Core Audio APIs directly, the application must provide its own stream routing implementation. To do so, it handles the new notification events that tell a WASAPI client when the default device has changed or a device has been removed. For more information about this feature, see Stream Routing. |
Protected User Mode Audio (PUMA) (Improved) | PUMA has been updated in Windows 7 with several improvements. For more information about the improvements, see Protected User Mode Audio (PUMA). |
The WAVEFORMATEXTENSIBLE structure has been extended to the WAVEFORMATEXTENSIBLE_IEC61937 structure (New) | In Windows 7, a new structure has been added to support IEC 61937 transmissions. WAVEFORMATEXTENSIBLE_IEC61937 extends the WAVEFORMATEXTENSIBLE structure to store two sets of audio stream characteristics: the encoded audio format before transmission and the characteristics of the audio stream after it has been decoded. The new structure explicitly specifies the effective number of channels, sample size, and data rate of a non-PCM format. With this information, an application can infer the quality level of the non-PCM stream after it is decompressed and played. For more information, see Representing Formats for IEC 61937 Transmissions. |
IAudioClient::Initialize (Improved) | The IAudioClient::Initialize method has been improved to indicate specific errors that might occur while opening an audio stream. New error codes have been added for this purpose. For more information about these errors, see the Return Value section in IAudioClient::Initialize. |
IAudioCaptureClient::GetBuffer and IAudioRenderClient::GetBuffer (Improved) | The IAudioCaptureClient::GetBuffer and IAudioRenderClient::GetBuffer methods have been improved to return the AUDCLNT_E_BUFFER_ERROR error code, which indicates that an exclusive-mode endpoint buffer could not be retrieved. For more information, see the Remarks sections in IAudioCaptureClient::GetBuffer and IAudioRenderClient::GetBuffer. |
Jack detection capability (Improved) | A new interface in Windows 7, IKsJackDescription2, extends IKsJackDescription. By using the new interface, the audio stack or an application can get additional jack information, including the jack's presence-detection capability and whether the device's format can change dynamically. |
Windows Samples (New) | New samples have been added to the Windows SDK that demonstrate the use of the Core Audio APIs. For more information, see SDK Samples That Use the Core Audio APIs. |
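The communication device and ducking rows above describe a pattern that an application implements with a few Core Audio calls. The following minimal sketch, assuming COM is already initialized on the calling thread, retrieves the default communications rendering endpoint (the eCommunications role) and uses IAudioSessionControl2::SetDuckingPreference to opt the application's session out of the default ducking behavior. The helper name OptOutOfDefaultDucking is illustrative only, and error handling is condensed.

```cpp
#include <windows.h>
#include <mmdeviceapi.h>
#include <audiopolicy.h>

// Sketch: get the default communications render endpoint and opt the
// current audio session out of automatic ducking. The caller is
// responsible for CoInitializeEx; error handling is reduced for brevity.
HRESULT OptOutOfDefaultDucking()
{
    IMMDeviceEnumerator *pEnumerator = NULL;
    IMMDevice *pCommDevice = NULL;
    IAudioSessionManager2 *pSessionManager = NULL;
    IAudioSessionControl *pSessionControl = NULL;
    IAudioSessionControl2 *pSessionControl2 = NULL;

    HRESULT hr = CoCreateInstance(__uuidof(MMDeviceEnumerator), NULL,
                                  CLSCTX_ALL, __uuidof(IMMDeviceEnumerator),
                                  (void**)&pEnumerator);
    if (FAILED(hr)) return hr;

    // eCommunications selects the device the user chose for communications
    // in the Sound control panel.
    hr = pEnumerator->GetDefaultAudioEndpoint(eRender, eCommunications,
                                              &pCommDevice);
    if (SUCCEEDED(hr))
        hr = pCommDevice->Activate(__uuidof(IAudioSessionManager2),
                                   CLSCTX_ALL, NULL,
                                   (void**)&pSessionManager);
    if (SUCCEEDED(hr))
        hr = pSessionManager->GetAudioSessionControl(NULL, 0,
                                                     &pSessionControl);
    if (SUCCEEDED(hr))
        hr = pSessionControl->QueryInterface(__uuidof(IAudioSessionControl2),
                                             (void**)&pSessionControl2);
    if (SUCCEEDED(hr))
        hr = pSessionControl2->SetDuckingPreference(TRUE);  // TRUE = opt out

    if (pSessionControl2) pSessionControl2->Release();
    if (pSessionControl) pSessionControl->Release();
    if (pSessionManager) pSessionManager->Release();
    if (pCommDevice) pCommDevice->Release();
    if (pEnumerator) pEnumerator->Release();
    return hr;
}
```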
Major New Interfaces
Several interfaces are new for Windows 7; they are included in the interface tables in the sections that follow.
Core Audio Interfaces
This programming reference for the Core Audio SDK includes the following interfaces:
MMDevice API
The Windows Multimedia Device (MMDevice) API enables audio clients to discover audio endpoint devices, determine their capabilities, and create driver instances for those devices. Header file Mmdeviceapi.h defines the interfaces in the MMDevice API. For more information, see About MMDevice API.
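As an illustration of that enumeration pattern, here is a minimal sketch, assuming COM has already been initialized on the calling thread. The helper name ListRenderEndpoints is ours; the sketch creates the device enumerator, lists the active rendering endpoints, and reads each device's friendly name from its property store, with error handling abbreviated.

```cpp
#include <windows.h>
#include <propidl.h>
#include <mmdeviceapi.h>
#include <functiondiscoverykeys_devpkey.h>  // PKEY_Device_FriendlyName
#include <stdio.h>

// Sketch: enumerate the active audio rendering endpoints and print each
// device's friendly name. Assumes CoInitializeEx was already called.
void ListRenderEndpoints()
{
    IMMDeviceEnumerator *pEnumerator = NULL;
    IMMDeviceCollection *pCollection = NULL;

    if (FAILED(CoCreateInstance(__uuidof(MMDeviceEnumerator), NULL, CLSCTX_ALL,
                                __uuidof(IMMDeviceEnumerator),
                                (void**)&pEnumerator)))
        return;

    if (SUCCEEDED(pEnumerator->EnumAudioEndpoints(eRender, DEVICE_STATE_ACTIVE,
                                                  &pCollection)))
    {
        UINT count = 0;
        pCollection->GetCount(&count);
        for (UINT i = 0; i < count; i++)
        {
            IMMDevice *pDevice = NULL;
            if (SUCCEEDED(pCollection->Item(i, &pDevice)))
            {
                IPropertyStore *pProps = NULL;
                if (SUCCEEDED(pDevice->OpenPropertyStore(STGM_READ, &pProps)))
                {
                    PROPVARIANT varName;
                    PropVariantInit(&varName);
                    if (SUCCEEDED(pProps->GetValue(PKEY_Device_FriendlyName,
                                                   &varName)))
                        wprintf(L"Endpoint %u: %s\n", i, varName.pwszVal);
                    PropVariantClear(&varName);
                    pProps->Release();
                }
                pDevice->Release();
            }
        }
        pCollection->Release();
    }
    pEnumerator->Release();
}
```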
The following table lists the MMDevice interfaces available with the Core Audio SDK for Windows Vista and later.
Interface | Description |
---|---|
IMMDevice | Represents an audio device. |
IMMDeviceCollection | Represents a collection of audio devices. |
IMMDeviceEnumerator | Provides methods for enumerating audio devices. |
IMMEndpoint | Represents an audio endpoint device. |
IMMNotificationClient | Provides notifications when an audio endpoint device is added or removed, when the state or properties of a device change, or when there is a change in the default role assigned to a device. |
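IMMNotificationClient is also the hook that the Windows 7 stream routing feature described earlier relies on: an application that uses WASAPI directly registers an implementation of this interface and reacts when the default endpoint changes or a device is removed. The skeletal handler below is a sketch only, assuming COM is initialized; the class name is illustrative, and the actual re-opening of the stream on the new device is left as a comment.

```cpp
#include <windows.h>
#include <mmdeviceapi.h>

// Skeletal notification client: reacts when the default rendering endpoint
// changes, which is the trigger an application uses to re-route its stream.
class CMMNotificationClient : public IMMNotificationClient
{
    LONG m_refCount;
public:
    CMMNotificationClient() : m_refCount(1) {}

    // IUnknown
    STDMETHODIMP_(ULONG) AddRef()  { return InterlockedIncrement(&m_refCount); }
    STDMETHODIMP_(ULONG) Release()
    {
        ULONG ref = InterlockedDecrement(&m_refCount);
        if (ref == 0) delete this;
        return ref;
    }
    STDMETHODIMP QueryInterface(REFIID riid, void **ppv)
    {
        if (riid == IID_IUnknown || riid == __uuidof(IMMNotificationClient)) {
            *ppv = static_cast<IMMNotificationClient*>(this);
            AddRef();
            return S_OK;
        }
        *ppv = NULL;
        return E_NOINTERFACE;
    }

    // IMMNotificationClient
    STDMETHODIMP OnDefaultDeviceChanged(EDataFlow flow, ERole role,
                                        LPCWSTR pwstrDefaultDeviceId)
    {
        if (flow == eRender && role == eConsole) {
            // Signal the rendering thread to close the current IAudioClient
            // and reinitialize it on the new default endpoint.
        }
        return S_OK;
    }
    STDMETHODIMP OnDeviceRemoved(LPCWSTR pwstrDeviceId) { return S_OK; }
    STDMETHODIMP OnDeviceAdded(LPCWSTR pwstrDeviceId) { return S_OK; }
    STDMETHODIMP OnDeviceStateChanged(LPCWSTR pwstrDeviceId, DWORD dwNewState) { return S_OK; }
    STDMETHODIMP OnPropertyValueChanged(LPCWSTR pwstrDeviceId, const PROPERTYKEY key) { return S_OK; }
};

// Registration (pEnumerator is an IMMDeviceEnumerator*):
//   CMMNotificationClient *pClient = new CMMNotificationClient();
//   pEnumerator->RegisterEndpointNotificationCallback(pClient);
//   ...
//   pEnumerator->UnregisterEndpointNotificationCallback(pClient);
```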
WASAPI
The Windows Audio Session API (WASAPI) enables client applications to manage the flow of audio data between the application and an audio endpoint device. Header files Audioclient.h and Audiopolicy.h define the WASAPI interfaces. For more information, see About WASAPI.
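To make the role of IAudioClient concrete, the following sketch activates it on the default rendering endpoint, negotiates the mix format, and initializes a shared-mode stream; IAudioClient::Initialize is also the call that can return the new Windows 7 error codes mentioned earlier. The helper name and buffer duration are illustrative, COM is assumed to be initialized, and error handling is condensed.

```cpp
#include <windows.h>
#include <mmdeviceapi.h>
#include <audioclient.h>

// Sketch: open a shared-mode rendering stream on the default rendering
// endpoint. Assumes COM is initialized; cleanup is condensed.
HRESULT OpenSharedRenderStream(IAudioClient **ppAudioClient)
{
    IMMDeviceEnumerator *pEnumerator = NULL;
    IMMDevice *pDevice = NULL;
    IAudioClient *pAudioClient = NULL;
    WAVEFORMATEX *pMixFormat = NULL;

    HRESULT hr = CoCreateInstance(__uuidof(MMDeviceEnumerator), NULL,
                                  CLSCTX_ALL, __uuidof(IMMDeviceEnumerator),
                                  (void**)&pEnumerator);
    if (SUCCEEDED(hr))
        hr = pEnumerator->GetDefaultAudioEndpoint(eRender, eConsole, &pDevice);
    if (SUCCEEDED(hr))
        hr = pDevice->Activate(__uuidof(IAudioClient), CLSCTX_ALL, NULL,
                               (void**)&pAudioClient);
    if (SUCCEEDED(hr))
        hr = pAudioClient->GetMixFormat(&pMixFormat);
    if (SUCCEEDED(hr))
    {
        // 100-nanosecond units; 1,000,000 = 100 ms of buffer (example value).
        const REFERENCE_TIME bufferDuration = 1000000;
        hr = pAudioClient->Initialize(AUDCLNT_SHAREMODE_SHARED, 0,
                                      bufferDuration, 0, pMixFormat, NULL);
    }

    if (SUCCEEDED(hr)) {
        *ppAudioClient = pAudioClient;   // caller releases
    } else if (pAudioClient) {
        pAudioClient->Release();
    }
    if (pMixFormat) CoTaskMemFree(pMixFormat);
    if (pDevice) pDevice->Release();
    if (pEnumerator) pEnumerator->Release();
    return hr;
}
```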
The following table lists the WASAPI interfaces available with the Core Audio SDK for Windows Vista and later.
Interface | Description |
---|---|
IActivateAudioInterfaceAsyncOperation | Represents an asynchronous operation activating a WASAPI interface and provides a method to retrieve the results of the activation. Applies beginning with Windows 8. |
IActivateAudioInterfaceCompletionHandler | Provides a callback to indicate that activation of a WASAPI interface is complete. Applies beginning with Windows 8. |
IAudioCaptureClient | Enables a client to read input data from a capture endpoint buffer. |
IAudioClient | Enables a client to create and initialize an audio stream between an audio application and the audio engine or the hardware buffer of an audio endpoint device. |
IAudioClock | Enables a client to monitor a stream’s data rate and the current position in the stream. |
IAudioClock2 | Enables a client to get the current device position. |
IAudioClockAdjustment | Enables a client to set the sample rate of a stream. |
IAudioRenderClient | Enables a client to write output data to a rendering endpoint buffer. |
IAudioSessionControl | Enables a client to configure the control parameters for an audio session and to monitor events in the session. |
IAudioSessionControl2 | Enables a client to get information about the audio session. |
IAudioSessionManager | Enables a client to access the session controls and volume controls for both cross-process and process-specific audio sessions. |
IAudioSessionManager2 | Manages all submixes including enumeration and notification of submixes. It also provides support for ducking notifications. |
IAudioSessionEnumerator | Enables a client to enumerate audio sessions. |
IAudioStreamVolume | Enables a client to control and monitor the volume levels for all of the channels in an audio stream. |
IChannelAudioVolume | Enables a client to control the volume levels for all of the channels in the audio session that the stream belongs to. |
ISimpleAudioVolume | Enables a client to control the master volume level of an audio session. |
IAudioSessionEvents | Provides notifications of session-related events such as changes in the volume level, display name, and session state. |
IAudioSessionNotification | Sends notifications when session changes occur. |
IAudioVolumeDuckNotification | Sends notifications about pending system ducking changes. |
DeviceTopology API
The DeviceTopology API provides client applications with the ability to traverse the functional hardware topologies of audio rendering and capture devices. Header file Devicetopology.h defines the interfaces in the DeviceTopology API. For more information, see Device Topologies and DeviceTopology API.
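As a starting point for such a traversal, the sketch below activates IDeviceTopology on an endpoint device and retrieves the endpoint's connector, from which an application can step across to the adapter device and walk its parts. The helper name is illustrative, and error handling is abbreviated.

```cpp
#include <windows.h>
#include <mmdeviceapi.h>
#include <devicetopology.h>

// Sketch: activate IDeviceTopology on an endpoint device and get the
// endpoint's connector, the starting point for walking the adapter
// topology. pEndpoint is an IMMDevice* obtained from the MMDevice API.
HRESULT GetEndpointConnector(IMMDevice *pEndpoint, IConnector **ppConnector)
{
    IDeviceTopology *pTopology = NULL;

    HRESULT hr = pEndpoint->Activate(__uuidof(IDeviceTopology), CLSCTX_ALL,
                                     NULL, (void**)&pTopology);
    if (SUCCEEDED(hr))
    {
        UINT connectorCount = 0;
        pTopology->GetConnectorCount(&connectorCount);

        // An endpoint device topology contains a single connector, which
        // links the endpoint to the audio adapter.
        if (connectorCount > 0)
            hr = pTopology->GetConnector(0, ppConnector);
        else
            hr = E_FAIL;

        pTopology->Release();
    }
    return hr;
}
```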
The following table lists the DeviceTopology interfaces available with the Core Audio SDK for Windows Vista and later.
Interface | Description |
---|---|
IAudioAutoGainControl | Provides access to a hardware automatic gain control (AGC). |
IAudioBass | Provides access to a hardware bass-level control. |
IAudioChannelConfig | Provides access to a hardware channel-configuration control. |
IAudioInputSelector | Provides access to a hardware multiplexer control (input selector). |
IAudioLoudness | Provides access to a "loudness" compensation control. |
IAudioMidrange | Provides access to a hardware midrange-level control. |
IAudioMute | Provides access to a hardware mute control. |
IAudioOutputSelector | Provides access to a hardware demultiplexer control (output selector). |
IAudioPeakMeter | Provides access to a hardware peak-meter control. |
IAudioTreble | Provides access to a hardware treble-level control. |
IAudioVolumeLevel | Provides access to a hardware volume control. |
IConnector | Represents a point of connection between components. |
IControlInterface | Represents a control interface on a part (subunit or connector). |
IDeviceSpecificProperty | Represents a device-specific property of a connector or subunit. |
IDeviceTopology | Provides access to the topology of an audio device. |
IKsFormatSupport | Provides information about the audio data formats that are supported by a software-configured I/O connection (typically a DMA channel) between the audio device and system memory. |
IKsJackDescription | Provides information about the jacks or internal connectors that provide a physical connection between a device on an audio adapter and an external or internal endpoint device (for example, a microphone or CD player). |
IKsJackDescription2 | Provides convenient access to the KSPROPERTY_JACK_DESCRIPTION2 property of a connector to an endpoint device. |
IKsJackSinkInformation | Provides information about the jack sink if the jack is supported by the hardware. |
IPart | Represents a part (connector or subunit) of a device topology. |
IPartsList | Represents a list of parts (connectors and subunits). |
IPerChannelDbLevel | Represents a generic subunit control interface that provides per-channel control over the volume level, in decibels, of an audio stream or of a frequency band in an audio stream. |
ISubunit | Represents a hardware subunit (for example, a volume-level control) that lies in the data path between a client and an audio endpoint device. |
IControlChangeNotify | Provides notifications when the status of a part (connector or subunit) changes. |
EndpointVolume API
The EndpointVolume API enables specialized clients to control and monitor the volume levels of audio endpoint devices. Header file Endpointvolume.h defines the interfaces in the EndpointVolume API. For more information, see EndpointVolume API.
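As a brief illustration, the following sketch activates IAudioEndpointVolume on the default rendering endpoint and reads the master volume level as a scalar between 0.0 and 1.0. The helper name is illustrative; COM is assumed to be initialized and error handling is abbreviated.

```cpp
#include <windows.h>
#include <mmdeviceapi.h>
#include <endpointvolume.h>
#include <stdio.h>

// Sketch: read the master volume level of the default rendering endpoint.
// Assumes COM is initialized; error handling is abbreviated.
void PrintMasterVolume()
{
    IMMDeviceEnumerator *pEnumerator = NULL;
    IMMDevice *pDevice = NULL;
    IAudioEndpointVolume *pEndpointVolume = NULL;

    if (SUCCEEDED(CoCreateInstance(__uuidof(MMDeviceEnumerator), NULL,
                                   CLSCTX_ALL, __uuidof(IMMDeviceEnumerator),
                                   (void**)&pEnumerator)) &&
        SUCCEEDED(pEnumerator->GetDefaultAudioEndpoint(eRender, eConsole,
                                                       &pDevice)) &&
        SUCCEEDED(pDevice->Activate(__uuidof(IAudioEndpointVolume), CLSCTX_ALL,
                                    NULL, (void**)&pEndpointVolume)))
    {
        float level = 0.0f;   // 0.0 = silent, 1.0 = full volume
        if (SUCCEEDED(pEndpointVolume->GetMasterVolumeLevelScalar(&level)))
            wprintf(L"Master volume: %.0f%%\n", level * 100.0f);
    }

    if (pEndpointVolume) pEndpointVolume->Release();
    if (pDevice) pDevice->Release();
    if (pEnumerator) pEnumerator->Release();
}
```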
The following table lists the EndpointVolume interfaces available with the Core Audio SDK for Windows Vista and later.