expo-av allows you to implement audio playback and recording in your app.

Platform compatibility: Android Device | Android Emulator | iOS Device | iOS Simulator | Web

To install, run:

```
expo install expo-av
```
If you're installing this in a bare React Native app, you should also follow these additional installation instructions.
```javascript
import * as React from 'react';
import { Text, View, StyleSheet, Button } from 'react-native';
import { Audio } from 'expo-av';

export default function App() {
  const [sound, setSound] = React.useState();

  async function playSound() {
    console.log('Loading Sound');
    const { sound } = await Audio.Sound.createAsync(require('./assets/Hello.mp3'));
    setSound(sound);

    console.log('Playing Sound');
    await sound.playAsync();
  }

  React.useEffect(() => {
    return sound
      ? () => {
          console.log('Unloading Sound');
          sound.unloadAsync();
        }
      : undefined;
  }, [sound]);

  return (
    <View style={styles.container}>
      <Button title="Play Sound" onPress={playSound} />
    </View>
  );
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
    justifyContent: 'center',
    backgroundColor: '#ecf0f1',
    padding: 10,
  },
});
```
```javascript
import * as React from 'react';
import { Text, View, StyleSheet, Button } from 'react-native';
import { Audio } from 'expo-av';

export default function App() {
  const [recording, setRecording] = React.useState();

  async function startRecording() {
    try {
      console.log('Requesting permissions..');
      await Audio.requestPermissionsAsync();
      await Audio.setAudioModeAsync({
        allowsRecordingIOS: true,
        playsInSilentModeIOS: true,
      });

      console.log('Starting recording..');
      const { recording } = await Audio.Recording.createAsync(
        Audio.RECORDING_OPTIONS_PRESET_HIGH_QUALITY
      );
      setRecording(recording);
      console.log('Recording started');
    } catch (err) {
      console.error('Failed to start recording', err);
    }
  }

  async function stopRecording() {
    console.log('Stopping recording..');
    setRecording(undefined);
    await recording.stopAndUnloadAsync();
    const uri = recording.getURI();
    console.log('Recording stopped and stored at', uri);
  }

  return (
    <View style={styles.container}>
      <Button
        title={recording ? 'Stop Recording' : 'Start Recording'}
        onPress={recording ? stopRecording : startRecording}
      />
    </View>
  );
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
    justifyContent: 'center',
    backgroundColor: '#ecf0f1',
    padding: 10,
  },
});
```
```javascript
import { Audio } from 'expo-av';
```
Passing `true` to `Audio.setIsEnabledAsync(value)` enables Audio, and `false` disables it. Returns a `Promise` that will reject if audio playback could not be enabled for the device.

The mode dictionary passed to `Audio.setAudioModeAsync(mode)` supports the following keys:

- `playsInSilentModeIOS`: a boolean selecting if your experience's audio should play in silent mode on iOS. This value defaults to `false`.
- `allowsRecordingIOS`: a boolean selecting if recording is enabled on iOS. This value defaults to `false`. NOTE: when this flag is set to `true`, playback may be routed to the phone receiver instead of to the speaker.
- `staysActiveInBackground`: a boolean selecting if the audio session (playback or recording) should stay active even when the app goes into the background. This value defaults to `false`. This is not available in Expo Go for iOS; it will only work in standalone apps. To enable it for standalone apps, follow the instructions below to add `UIBackgroundModes` to your app configuration.
- `interruptionModeIOS`: an enum selecting how your experience's audio should interact with the audio from other apps on iOS:
  - `INTERRUPTION_MODE_IOS_MIX_WITH_OTHERS`: This is the default option. If this option is set, your experience's audio is mixed with audio playing in background apps.
  - `INTERRUPTION_MODE_IOS_DO_NOT_MIX`: If this option is set, your experience's audio interrupts audio from other apps.
  - `INTERRUPTION_MODE_IOS_DUCK_OTHERS`: If this option is set, your experience's audio lowers the volume ("ducks") of audio from other apps while your audio plays.
- `shouldDuckAndroid`: a boolean selecting if your experience's audio should automatically be lowered in volume ("duck") if audio from another app interrupts your experience. This value defaults to `true`. If `false`, audio from other apps will pause your audio.
- `interruptionModeAndroid`: an enum selecting how your experience's audio should interact with the audio from other apps on Android:
  - `INTERRUPTION_MODE_ANDROID_DO_NOT_MIX`: If this option is set, your experience's audio interrupts audio from other apps.
  - `INTERRUPTION_MODE_ANDROID_DUCK_OTHERS`: This is the default option. If this option is set, your experience's audio lowers the volume ("ducks") of audio from other apps while your audio plays.
- `playThroughEarpieceAndroid`: a boolean selecting if the audio is routed to the earpiece (on Android). This value defaults to `false`.

Returns a `Promise` that will reject if the audio mode could not be enabled for the device. Note that these are the only legal AudioMode combinations of (`playsInSilentModeIOS`, `allowsRecordingIOS`, `staysActiveInBackground`, `interruptionModeIOS`), and any other will result in promise rejection:

- `false, false, false, INTERRUPTION_MODE_IOS_DO_NOT_MIX`
- `false, false, false, INTERRUPTION_MODE_IOS_MIX_WITH_OTHERS`
- `true, true, true, INTERRUPTION_MODE_IOS_DO_NOT_MIX`
- `true, true, true, INTERRUPTION_MODE_IOS_DUCK_OTHERS`
- `true, true, true, INTERRUPTION_MODE_IOS_MIX_WITH_OTHERS`
- `true, true, false, INTERRUPTION_MODE_IOS_DO_NOT_MIX`
- `true, true, false, INTERRUPTION_MODE_IOS_DUCK_OTHERS`
- `true, true, false, INTERRUPTION_MODE_IOS_MIX_WITH_OTHERS`
- `true, false, true, INTERRUPTION_MODE_IOS_DO_NOT_MIX`
- `true, false, true, INTERRUPTION_MODE_IOS_DUCK_OTHERS`
- `true, false, true, INTERRUPTION_MODE_IOS_MIX_WITH_OTHERS`
- `true, false, false, INTERRUPTION_MODE_IOS_DO_NOT_MIX`
- `true, false, false, INTERRUPTION_MODE_IOS_DUCK_OTHERS`
- `true, false, false, INTERRUPTION_MODE_IOS_MIX_WITH_OTHERS`
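The combination table above can be encoded as a small lookup, for example to fail fast before calling `Audio.setAudioModeAsync`. The `isLegalIOSAudioMode` helper below is a hypothetical sketch, not part of expo-av; the short strings stand in for the `Audio.INTERRUPTION_MODE_IOS_*` constants.

```javascript
// Hypothetical helper -- not part of expo-av. Encodes the table of legal
// (playsInSilentModeIOS, allowsRecordingIOS, staysActiveInBackground,
// interruptionModeIOS) combinations. The string values stand in for the
// Audio.INTERRUPTION_MODE_IOS_* constants.
const LEGAL_IOS_AUDIO_MODES = new Set([
  'false,false,false,DO_NOT_MIX',
  'false,false,false,MIX_WITH_OTHERS',
  'true,true,true,DO_NOT_MIX',
  'true,true,true,DUCK_OTHERS',
  'true,true,true,MIX_WITH_OTHERS',
  'true,true,false,DO_NOT_MIX',
  'true,true,false,DUCK_OTHERS',
  'true,true,false,MIX_WITH_OTHERS',
  'true,false,true,DO_NOT_MIX',
  'true,false,true,DUCK_OTHERS',
  'true,false,true,MIX_WITH_OTHERS',
  'true,false,false,DO_NOT_MIX',
  'true,false,false,DUCK_OTHERS',
  'true,false,false,MIX_WITH_OTHERS',
]);

function isLegalIOSAudioMode({
  playsInSilentModeIOS,
  allowsRecordingIOS,
  staysActiveInBackground,
  interruptionModeIOS,
}) {
  // Join the four values into the same "a,b,c,MODE" shape used by the table.
  const key = [
    playsInSilentModeIOS,
    allowsRecordingIOS,
    staysActiveInBackground,
    interruptionModeIOS,
  ].join(',');
  return LEGAL_IOS_AUDIO_MODES.has(key);
}
```

Note, for instance, that ducking others is never legal while `playsInSilentModeIOS` is `false`.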
Playing or recording audio while the app is in the background on iOS requires adding `audio` to the `UIBackgroundModes` array in your Info.plist file. In standalone apps this array is empty by default, so in order to use background features you will need to add the appropriate keys to your app.json configuration:

```
{
  "expo": {
    ...
    "ios": {
      ...
      "infoPlist": {
        ...
        "UIBackgroundModes": [
          "audio"
        ]
      }
    }
  }
}
```
Playback is done with instances of `Audio.Sound`.

```javascript
const sound = new Audio.Sound();
try {
  await sound.loadAsync(require('./assets/sounds/hello.mp3'));
  await sound.playAsync();
  // Your sound is playing!

  // Don't forget to unload the sound from memory
  // when you are done using the Sound object
  await sound.unloadAsync();
} catch (error) {
  // An error occurred!
}
```
`Audio.Sound.createAsync(source, initialStatus = {}, onPlaybackStatusUpdate = null, downloadFirst = true)`

Creates and loads a sound from `source`, with the optional `initialStatus`, `onPlaybackStatusUpdate`, and `downloadFirst`.

```javascript
const { sound } = await Audio.Sound.createAsync(
  source,
  initialStatus,
  onPlaybackStatusUpdate,
  downloadFirst
);

// Which is equivalent to the following:
const sound = new Audio.Sound();
sound.setOnPlaybackStatusUpdate(onPlaybackStatusUpdate);
await sound.loadAsync(source, initialStatus, downloadFirst);
```
The `source` parameter may take one of the following forms:

- a dictionary of the form `{ uri: string, headers?: { [string]: string }, overrideFileExtensionAndroid?: string }`, with a network URL pointing to a media file on the web, an optional headers object passed in a network request to the `uri`, and an optional Android-specific `overrideFileExtensionAndroid` string overriding the extension inferred from the URL. The `overrideFileExtensionAndroid` property may come in handy if the player receives a URL like `example.com/play` which redirects to `example.com/player.m3u8`. Setting this property to `m3u8` would allow the Android player to properly infer the content type of the media and use the proper media file reader.
- `require('path/to/file')` for an audio file asset in the source code directory.
- an `Asset` object for an audio file asset.

The other parameters:

- `initialStatus`: the initial intended `PlaybackStatusToSet` of the sound, whose values will override the default initial playback status. This value defaults to `{}` if no parameter is passed. See the AV documentation for details on `PlaybackStatusToSet` and the default initial playback status.
- `onPlaybackStatusUpdate`: a function taking a single parameter `PlaybackStatus`. This value defaults to `null` if no parameter is passed. See the AV documentation for details on the functionality provided by `onPlaybackStatusUpdate`.
- `downloadFirst`: a boolean defaulting to `true`. Note that at the moment, this will only work for `source`s of the form `require('path/to/file')` or `Asset` objects.

Returns a `Promise` that is rejected if creation failed, or fulfilled with the following dictionary if creation succeeded:

- `sound`: the newly created and loaded `Sound` object.
- `status`: the `PlaybackStatus` of the `Sound` object. See the AV documentation for further information.

```javascript
try {
  const { sound: soundObject, status } = await Audio.Sound.createAsync(
    require('./assets/sounds/hello.mp3'),
    { shouldPlay: true }
  );
  // Your sound is playing!
} catch (error) {
  // An error occurred!
}
```
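To illustrate the network form of `source` described above, here is a sketch of a source dictionary using `overrideFileExtensionAndroid` for a URL that redirects to an HLS playlist. The URL and the Authorization header are placeholders invented for the example.

```javascript
// Hypothetical source object -- the URL and Authorization header are
// placeholders. Because the /play URL redirects to an HLS (.m3u8) playlist,
// overrideFileExtensionAndroid tells the Android player to treat the stream
// as m3u8 even though the URL itself carries no extension.
const source = {
  uri: 'https://example.com/play',
  headers: { Authorization: 'Bearer <token>' },
  overrideFileExtensionAndroid: 'm3u8',
};

// It would then be passed to the player as usual, e.g.:
// const { sound } = await Audio.Sound.createAsync(source);
```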
On the `soundObject` reference, the following API is provided:

`soundObject.setOnMetadataUpdate(onMetadataUpdate)` [iOS only]

Sets a function to be called whenever the metadata (of type `AVMetadata`, details below) of the sound object, if any, changes. The `onMetadataUpdate` function takes the `AVMetadata` (described below) as a parameter.

`soundObject.setOnAudioSampleReceived(callback)`

Sets a function to be called during playback, receiving the audio sample as a parameter. The callback takes an `AudioSample` (described below) as a parameter.

The rest of the imperative playback API for `Audio.Sound` is the same as the imperative playback API for `Video` -- see the AV documentation for further information:

- `soundObject.loadAsync(source, initialStatus = {}, downloadFirst = true)`
- `soundObject.unloadAsync()`
- `soundObject.getStatusAsync()`
- `soundObject.setOnPlaybackStatusUpdate(onPlaybackStatusUpdate)`
- `soundObject.setStatusAsync(statusToSet)`
- `soundObject.playAsync()`
- `soundObject.replayAsync()`
- `soundObject.pauseAsync()`
- `soundObject.stopAsync()`
- `soundObject.setPositionAsync(millis)`
- `soundObject.setRateAsync(value, shouldCorrectPitch, pitchCorrectionQuality)`
- `soundObject.setVolumeAsync(value)`
- `soundObject.setIsMutedAsync(value)`
- `soundObject.setIsLoopingAsync(value)`
- `soundObject.setProgressUpdateIntervalAsync(millis)`
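As one example of driving the playback API above, a fade-out can be expressed as a series of `setVolumeAsync` calls. The ramp computation below is a hypothetical helper, not part of expo-av; the commented loop shows how it might be wired to a loaded `soundObject`.

```javascript
// Hypothetical helper -- computes a linear ramp of volume values from
// `from` to `to` (both in the 0..1 range accepted by setVolumeAsync).
function volumeRamp(from, to, steps) {
  const values = [];
  for (let i = 1; i <= steps; i++) {
    values.push(from + ((to - from) * i) / steps);
  }
  return values;
}

// Sketch of how the ramp might drive a loaded soundObject:
// for (const v of volumeRamp(1, 0, 10)) {
//   await soundObject.setVolumeAsync(v);
//   await new Promise((resolve) => setTimeout(resolve, 50));
// }
// await soundObject.stopAsync();
```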
`AVMetadata`: an object passed to the `onMetadataUpdate` function. It has the following keys:

- `title`: a string with the title of the sound object. This key is optional.

`AudioSample`: an object passed to the `onAudioSampleReceived` function. It represents a single sample from an audio source. The sample contains all frames (PCM buffer values) for each channel of the audio, so if the audio is stereo (interleaved), there will be two channels, one for left and one for right audio.

- `channels`: an array representing the data from each channel in PCM buffer format. Array elements are objects in the following format: `{ frames: number[] }`, where each frame is a number in PCM buffer format (`-1` to `1` range).
- `timestamp`: a number representing the timestamp of the current sample in seconds, relative to the audio track's timeline.

Known issue: when using the `ExoPlayer` Android implementation, the timestamp is always `-1`.
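The `AudioSample` shape lends itself to simple level metering. The following is a hypothetical sketch, not part of expo-av, that computes the RMS level of the first channel; the commented lines show how it could be fed from `soundObject.setOnAudioSampleReceived`.

```javascript
// Hypothetical helper -- computes the root-mean-square level of the first
// channel of an AudioSample-shaped object. Frames are PCM values in the
// -1..1 range, so the result lands in the 0..1 range.
function rmsLevel(sample) {
  const frames = sample.channels[0].frames;
  if (frames.length === 0) return 0;
  let sumSquares = 0;
  for (const f of frames) sumSquares += f * f;
  return Math.sqrt(sumSquares / frames.length);
}

// Sketch of wiring it up during playback:
// soundObject.setOnAudioSampleReceived((sample) => {
//   console.log('level', rmsLevel(sample), 'at', sample.timestamp);
// });
```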
Notes on web usage:

- A MediaRecorder issue on Chrome produces WebM files missing the duration metadata. See the open Chromium issue.
- MediaRecorder encoding options and other configurations are inconsistent across browsers; using a polyfill such as kbumsik/opus-media-recorder or ai/audio-recorder-polyfill in your application will improve your experience. Any options passed to `prepareToRecordAsync` will be passed directly to the MediaRecorder API, and as such to the polyfill.
- Web browsers require sites to be served securely in order for them to listen to a mic. See MediaDevices#getUserMedia Security for more details.
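The secure-context requirement above can be checked up front. This `canUseMicrophone` guard is a hypothetical sketch that takes a window-like object, mirroring the `isSecureContext` and `mediaDevices.getUserMedia` checks browsers apply before exposing the microphone.

```javascript
// Hypothetical guard -- not part of expo-av. Browsers only expose
// getUserMedia on secure origins (https or localhost), so both checks
// must pass before recording can work on web.
function canUseMicrophone(windowLike) {
  return Boolean(
    windowLike.isSecureContext &&
      windowLike.navigator &&
      windowLike.navigator.mediaDevices &&
      typeof windowLike.navigator.mediaDevices.getUserMedia === 'function'
  );
}
```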
`prepareToRecordAsync` must be called in order to record audio. Once recording is finished, call `stopAndUnloadAsync`. Note that only one recorder is allowed to exist in the state between `prepareToRecordAsync` and `stopAndUnloadAsync` at any given time.

Recording requires permission from the user; see the `Permissions` module for more details. Additionally, audio recording is not supported in the iOS Simulator.

Recording is done with instances of `Audio.Recording`.

```javascript
const recording = new Audio.Recording();
try {
  await recording.prepareToRecordAsync(Audio.RECORDING_OPTIONS_PRESET_HIGH_QUALITY);
  await recording.startAsync();
  // You are now recording!
} catch (error) {
  // An error occurred!
}
```
`Audio.Recording.createAsync(options, onRecordingStatusUpdate = null, progressUpdateIntervalMillis = null)`

Creates and starts a recording with the given options, and the optional `onRecordingStatusUpdate` and `progressUpdateIntervalMillis`.

```javascript
const { recording, status } = await Audio.Recording.createAsync(
  options,
  onRecordingStatusUpdate,
  progressUpdateIntervalMillis
);

// Which is equivalent to the following:
const recording = new Audio.Recording();
await recording.prepareToRecordAsync(options);
recording.setOnRecordingStatusUpdate(onRecordingStatusUpdate);
await recording.startAsync();
```
The `options` parameter defaults to `Audio.RECORDING_OPTIONS_PRESET_LOW_QUALITY`; see below for details on `RecordingOptions`. `onRecordingStatusUpdate` is a function taking a single parameter `status` (a dictionary, described in `getStatusAsync`). `progressUpdateIntervalMillis` is the interval between calls of `onRecordingStatusUpdate`; this value defaults to 500 milliseconds.

Returns a `Promise` that is rejected if creation failed, or fulfilled with the following dictionary if creation succeeded:

- `recording`: the newly created and started `Recording` object.
- `status`: the `RecordingStatus` of the `Recording` object. See the AV documentation for further information.

```javascript
try {
  const { recording: recordingObject, status } = await Audio.Recording.createAsync(
    Audio.RECORDING_OPTIONS_PRESET_HIGH_QUALITY
  );
  // You are now recording!
} catch (error) {
  // An error occurred!
}
```
`recordingInstance.getStatusAsync()`

Gets the `status` of the `Recording`. Returns a `Promise` that is resolved with the `status` of the `Recording`: a dictionary with the following key-value pairs.

Before `prepareToRecordAsync` is called, the `status` will be as follows:

- `canRecord`: a boolean set to `false`.
- `isDoneRecording`: a boolean set to `false`.

After `prepareToRecordAsync()` is called, but before `stopAndUnloadAsync()` is called, the `status` will be as follows:

- `canRecord`: a boolean set to `true`.
- `isRecording`: a boolean describing if the `Recording` is currently recording.
- `durationMillis`: the current duration of the recorded audio.
- `metering`: a number that is the most recent reading of the loudness in dB. The value ranges from -160 dBFS, indicating minimum power, to 0 dBFS, indicating maximum power. Present or not based on Recording options; see `RecordingOptions` for more information.
- `mediaServicesDidReset`: (iOS only) a boolean indicating whether media services were reset during recording. This may occur if the active input ceases to be available during recording (for example: AirPods are the active input and their batteries run out during recording).

After `stopAndUnloadAsync()` is called, the `status` will be as follows:

- `canRecord`: a boolean set to `false`.
- `isDoneRecording`: a boolean set to `true`.
- `durationMillis`: the final duration of the recorded audio.

`recordingInstance.setOnRecordingStatusUpdate(onRecordingStatusUpdate)`

Sets a function to be called regularly with the `status` of the `Recording`. See `getStatusAsync()` for details on `status`. `onRecordingStatusUpdate` will be called when another call to the API for this recording completes (such as `prepareToRecordAsync()`, `startAsync()`, `getStatusAsync()`, or `stopAndUnloadAsync()`), and will also be called at regular intervals while the recording can record. Call `setProgressUpdateInterval()` to modify the interval with which `onRecordingStatusUpdate` is called while the recording can record. The `onRecordingStatusUpdate` parameter is a function taking a single parameter `status` (a dictionary, described in `getStatusAsync`).

`recordingInstance.setProgressUpdateInterval(millis)`

Sets the interval with which `onRecordingStatusUpdate` is called while the recording can record. See `setOnRecordingStatusUpdate` for details. This value defaults to 500 milliseconds. The `millis` parameter is the new interval between calls of `onRecordingStatusUpdate`.

`recordingInstance.prepareToRecordAsync(options)`
Loads the recorder into memory and prepares it for recording. This must be called before calling `startAsync()`, and can only be called if the `Recording` instance has never yet been prepared. If no options are passed to `prepareToRecordAsync()`, the recorder will be created with options `Audio.RECORDING_OPTIONS_PRESET_LOW_QUALITY`. See below for details on `RecordingOptions`.

Returns a `Promise` that is fulfilled when the recorder is loaded and prepared, or rejects if this failed. If another `Recording` exists in your experience that is currently prepared to record, the `Promise` will reject. If the `RecordingOptions` provided are invalid, the `Promise` will also reject. The promise is resolved with the `status` of the recording (see `getStatusAsync()` for details).

`recordingInstance.getAvailableInputs()`

This method can only be called once the `Recording` has been prepared. Returns a `Promise` that is fulfilled with an array of `RecordingInput` objects with `name`, `uid` and `type` params.

`recordingInstance.getCurrentInput()`

This method can only be called once the `Recording` has been prepared. Returns a `Promise` that is fulfilled with a `RecordingInput` object with `name`, `uid` and `type` params.

`recordingInstance.setInput(inputUid)`

Sets the current input to the one matching the given `uid` of a `RecordingInput`. Returns a `Promise` that is resolved if successful or rejected if not.

`recordingInstance.startAsync()`

Begins recording. This method can only be called if the `Recording` has been prepared. Returns a `Promise` that is fulfilled when recording has begun, or rejects if recording could not start. The promise is resolved with the `status` of the recording (see `getStatusAsync()` for details).

`recordingInstance.pauseAsync()`

Pauses recording. This method can only be called if the `Recording` has been prepared. Returns a `Promise` that is fulfilled when recording has paused, or rejects if recording could not be paused. If the Android API version is less than 24, the `Promise` will reject. The promise is resolved with the `status` of the recording (see `getStatusAsync()` for details).

`recordingInstance.stopAndUnloadAsync()`

Stops the recording and deallocates the recorder from memory. This reverts the `Recording` instance to an unprepared state, and another `Recording` instance must be created in order to record again. This method can only be called if the `Recording` has been prepared.

NOTE: on Android this method may fail with `E_AUDIO_NODATA` when called too soon after `startAsync` and no audio data has been recorded yet. In that case the recorded file will be invalid and should be discarded.

Returns a `Promise` that is fulfilled when recording has stopped, or rejects if recording could not be stopped. The promise is resolved with the `status` of the recording (see `getStatusAsync()` for details).

`recordingInstance.getURI()`

Gets the local URI of the `Recording`. Note that this will only succeed once the `Recording` is prepared to record; on web, this will not return the URI until the recording is finished. Returns a `string` with the local URI of the `Recording`, or `null` if the `Recording` is not prepared to record (or, on Web, if the recording has not finished).

`recordingInstance.createNewLoadedSoundAsync()`
Creates and loads a new `Sound` object to play back the `Recording`. Note that this will only succeed once the `Recording` is done recording (once `stopAndUnloadAsync()` has been called).

- `initialStatus`: the initial intended `PlaybackStatusToSet` of the sound, whose values will override the default initial playback status. This value defaults to `{}` if no parameter is passed. See the AV documentation for details on `PlaybackStatusToSet` and the default initial playback status.
- `onPlaybackStatusUpdate`: a function taking a single parameter `PlaybackStatus`. This value defaults to `null` if no parameter is passed. See the AV documentation for details on the functionality provided by `onPlaybackStatusUpdate`.

Returns a `Promise` that is rejected if creation failed, or fulfilled with the following dictionary if creation succeeded:

- `sound`: the newly created and loaded `Sound` object.
- `status`: the `PlaybackStatus` of the `Sound` object. See the AV documentation for further information.

RecordingOptions

These are the options used with `prepareToRecordAsync()`. We provide the following preset options for convenience:

- `Audio.RECORDING_OPTIONS_PRESET_HIGH_QUALITY`
- `Audio.RECORDING_OPTIONS_PRESET_LOW_QUALITY`

We also allow the creation of custom options, to be passed to `prepareToRecordAsync()`. You will have to test your custom options on iOS and Android to make sure they're working. In the future, we will enumerate all possible valid combinations, but at this time, our goal is to make the basic use case easy (with presets) and the advanced use case possible (by exposing all the functionality available in native). As always, feel free to ping us on the forums or Slack with any questions.

`RecordingOptions` is a dictionary with the following keys:

- `isMeteringEnabled`: a boolean that determines whether audio level information will be part of the status object under the "metering" key.
- `keepAudioActiveHint`: a boolean that hints to keep the audio active after `prepareToRecordAsync` completes. Setting this value can improve the speed at which the recording starts. Only set this value to `true` when you call `startAsync` immediately after `prepareToRecordAsync`. This value is automatically set when using `Audio.Recording.createAsync()`.
- `android`: a dictionary of key-value pairs for the Android platform. This key is required.
  - `extension`: the desired file extension. This key is required. Example valid values are `.3gp` and `.m4a`. For more information, see the Android docs for supported output formats.
  - `outputFormat`: the desired file format. This key is required. See the next section for an enumeration of all valid values of `outputFormat`.
  - `audioEncoder`: the desired audio encoder. This key is required. See the next section for an enumeration of all valid values of `audioEncoder`.
  - `sampleRate`: the desired sample rate. This key is optional. An example valid value is `44100`.
  - `numberOfChannels`: the desired number of channels. This key is optional. Example valid values are `1` and `2`. Note that `prepareToRecordAsync()` may perform additional checks on the parameter to verify whether the specified number of audio channels is applicable.
  - `bitRate`: the desired bit rate. This key is optional. An example valid value is `128000`. Note that `prepareToRecordAsync()` may perform additional checks on the parameter to verify whether the specified bit rate is applicable, and sometimes the passed `bitRate` will be clipped internally to ensure the audio recording can proceed smoothly based on the capabilities of the platform.
  - `maxFileSize`: the desired maximum file size in bytes, after which the recording will stop (but `stopAndUnloadAsync()` must still be called after this point). This key is optional. An example valid value is `65536`.
- `ios`: a dictionary of key-value pairs for the iOS platform.
  - `extension`: the desired file extension. This key is required. An example valid value is `.caf`.
  - `outputFormat`: the desired file format. This key is optional. See the next section for an enumeration of all valid values of `outputFormat`.
  - `audioQuality`: the desired audio quality. This key is required. See the next section for an enumeration of all valid values of `audioQuality`.
  - `sampleRate`: the desired sample rate. This key is required. An example valid value is `44100`.
  - `numberOfChannels`: the desired number of channels. This key is required. Example valid values are `1` and `2`.
  - `bitRate`: the desired bit rate. This key is required. An example valid value is `128000`.
  - `bitRateStrategy`: the desired bit rate strategy. This key is optional. See the next section for an enumeration of all valid values of `bitRateStrategy`.
  - `bitDepthHint`: the desired bit depth hint. This key is optional. An example valid value is `16`.
  - `linearPCMBitDepth`: the desired PCM bit depth. This key is optional. An example valid value is `16`.
  - `linearPCMIsBigEndian`: a boolean describing if the PCM data should be formatted in big endian. This key is optional.
  - `linearPCMIsFloat`: a boolean describing if the PCM data should be encoded in floating point or integral values. This key is optional.

These are the valid values for `RecordingOptions` keys. Note: not all of the iOS formats included in this list of constants are currently supported by iOS, in spite of appearing in the Apple source code. For an accurate list of formats supported by iOS, see Core Audio Codecs and iPhone Audio File Formats.
For `android`:

`outputFormat`:

Audio.RECORDING_OPTION_ANDROID_OUTPUT_FORMAT_DEFAULT
Audio.RECORDING_OPTION_ANDROID_OUTPUT_FORMAT_THREE_GPP
Audio.RECORDING_OPTION_ANDROID_OUTPUT_FORMAT_MPEG_4
Audio.RECORDING_OPTION_ANDROID_OUTPUT_FORMAT_AMR_NB
Audio.RECORDING_OPTION_ANDROID_OUTPUT_FORMAT_AMR_WB
Audio.RECORDING_OPTION_ANDROID_OUTPUT_FORMAT_AAC_ADIF
Audio.RECORDING_OPTION_ANDROID_OUTPUT_FORMAT_AAC_ADTS
Audio.RECORDING_OPTION_ANDROID_OUTPUT_FORMAT_RTP_AVP
Audio.RECORDING_OPTION_ANDROID_OUTPUT_FORMAT_MPEG2TS
Audio.RECORDING_OPTION_ANDROID_OUTPUT_FORMAT_WEBM
`audioEncoder`:

Audio.RECORDING_OPTION_ANDROID_AUDIO_ENCODER_DEFAULT
Audio.RECORDING_OPTION_ANDROID_AUDIO_ENCODER_AMR_NB
Audio.RECORDING_OPTION_ANDROID_AUDIO_ENCODER_AMR_WB
Audio.RECORDING_OPTION_ANDROID_AUDIO_ENCODER_AAC
Audio.RECORDING_OPTION_ANDROID_AUDIO_ENCODER_HE_AAC
Audio.RECORDING_OPTION_ANDROID_AUDIO_ENCODER_AAC_ELD
For `ios`:

`outputFormat`:

Audio.RECORDING_OPTION_IOS_OUTPUT_FORMAT_LINEARPCM
Audio.RECORDING_OPTION_IOS_OUTPUT_FORMAT_AC3
Audio.RECORDING_OPTION_IOS_OUTPUT_FORMAT_60958AC3
Audio.RECORDING_OPTION_IOS_OUTPUT_FORMAT_APPLEIMA4
Audio.RECORDING_OPTION_IOS_OUTPUT_FORMAT_MPEG4AAC
Audio.RECORDING_OPTION_IOS_OUTPUT_FORMAT_MPEG4CELP
Audio.RECORDING_OPTION_IOS_OUTPUT_FORMAT_MPEG4HVXC
Audio.RECORDING_OPTION_IOS_OUTPUT_FORMAT_MPEG4TWINVQ
Audio.RECORDING_OPTION_IOS_OUTPUT_FORMAT_MACE3
Audio.RECORDING_OPTION_IOS_OUTPUT_FORMAT_MACE6
Audio.RECORDING_OPTION_IOS_OUTPUT_FORMAT_ULAW
Audio.RECORDING_OPTION_IOS_OUTPUT_FORMAT_ALAW
Audio.RECORDING_OPTION_IOS_OUTPUT_FORMAT_QDESIGN
Audio.RECORDING_OPTION_IOS_OUTPUT_FORMAT_QDESIGN2
Audio.RECORDING_OPTION_IOS_OUTPUT_FORMAT_QUALCOMM
Audio.RECORDING_OPTION_IOS_OUTPUT_FORMAT_MPEGLAYER1
Audio.RECORDING_OPTION_IOS_OUTPUT_FORMAT_MPEGLAYER2
Audio.RECORDING_OPTION_IOS_OUTPUT_FORMAT_MPEGLAYER3
Audio.RECORDING_OPTION_IOS_OUTPUT_FORMAT_APPLELOSSLESS
Audio.RECORDING_OPTION_IOS_OUTPUT_FORMAT_MPEG4AAC_HE
Audio.RECORDING_OPTION_IOS_OUTPUT_FORMAT_MPEG4AAC_LD
Audio.RECORDING_OPTION_IOS_OUTPUT_FORMAT_MPEG4AAC_ELD
Audio.RECORDING_OPTION_IOS_OUTPUT_FORMAT_MPEG4AAC_ELD_SBR
Audio.RECORDING_OPTION_IOS_OUTPUT_FORMAT_MPEG4AAC_ELD_V2
Audio.RECORDING_OPTION_IOS_OUTPUT_FORMAT_MPEG4AAC_HE_V2
Audio.RECORDING_OPTION_IOS_OUTPUT_FORMAT_MPEG4AAC_SPATIAL
Audio.RECORDING_OPTION_IOS_OUTPUT_FORMAT_AMR
Audio.RECORDING_OPTION_IOS_OUTPUT_FORMAT_AMR_WB
Audio.RECORDING_OPTION_IOS_OUTPUT_FORMAT_AUDIBLE
Audio.RECORDING_OPTION_IOS_OUTPUT_FORMAT_ILBC
Audio.RECORDING_OPTION_IOS_OUTPUT_FORMAT_DVIINTELIMA
Audio.RECORDING_OPTION_IOS_OUTPUT_FORMAT_MICROSOFTGSM
Audio.RECORDING_OPTION_IOS_OUTPUT_FORMAT_AES3
Audio.RECORDING_OPTION_IOS_OUTPUT_FORMAT_ENHANCEDAC3
`audioQuality`:

Audio.RECORDING_OPTION_IOS_AUDIO_QUALITY_MIN
Audio.RECORDING_OPTION_IOS_AUDIO_QUALITY_LOW
Audio.RECORDING_OPTION_IOS_AUDIO_QUALITY_MEDIUM
Audio.RECORDING_OPTION_IOS_AUDIO_QUALITY_HIGH
Audio.RECORDING_OPTION_IOS_AUDIO_QUALITY_MAX
`bitRateStrategy`:

Audio.RECORDING_OPTION_IOS_BIT_RATE_STRATEGY_CONSTANT
Audio.RECORDING_OPTION_IOS_BIT_RATE_STRATEGY_LONG_TERM_AVERAGE
Audio.RECORDING_OPTION_IOS_BIT_RATE_STRATEGY_VARIABLE_CONSTRAINED
Audio.RECORDING_OPTION_IOS_BIT_RATE_STRATEGY_VARIABLE
Here are the presets of `RecordingOptions`, as implemented in the Audio SDK:

```typescript
export const RECORDING_OPTIONS_PRESET_HIGH_QUALITY: RecordingOptions = {
  isMeteringEnabled: true,
  android: {
    extension: '.m4a',
    outputFormat: RECORDING_OPTION_ANDROID_OUTPUT_FORMAT_MPEG_4,
    audioEncoder: RECORDING_OPTION_ANDROID_AUDIO_ENCODER_AAC,
    sampleRate: 44100,
    numberOfChannels: 2,
    bitRate: 128000,
  },
  ios: {
    extension: '.caf',
    audioQuality: RECORDING_OPTION_IOS_AUDIO_QUALITY_MAX,
    sampleRate: 44100,
    numberOfChannels: 2,
    bitRate: 128000,
    linearPCMBitDepth: 16,
    linearPCMIsBigEndian: false,
    linearPCMIsFloat: false,
  },
};

export const RECORDING_OPTIONS_PRESET_LOW_QUALITY: RecordingOptions = {
  isMeteringEnabled: true,
  android: {
    extension: '.3gp',
    outputFormat: RECORDING_OPTION_ANDROID_OUTPUT_FORMAT_THREE_GPP,
    audioEncoder: RECORDING_OPTION_ANDROID_AUDIO_ENCODER_AMR_NB,
    sampleRate: 44100,
    numberOfChannels: 2,
    bitRate: 128000,
  },
  ios: {
    extension: '.caf',
    audioQuality: RECORDING_OPTION_IOS_AUDIO_QUALITY_MIN,
    sampleRate: 44100,
    numberOfChannels: 2,
    bitRate: 128000,
    linearPCMBitDepth: 16,
    linearPCMIsBigEndian: false,
    linearPCMIsFloat: false,
  },
};
```