A library that provides an API to implement audio playback and recording in apps.
This page documents an upcoming version of the Audio library. Expo Audio is currently in alpha and subject to breaking changes.
expo-audio is a cross-platform audio library for accessing the native audio capabilities of the device.

Audio recording APIs are not available on tvOS (Apple TV).

Note that audio automatically stops if headphones or Bluetooth audio devices are disconnected.
```
npx expo install expo-audio
```
If you are installing this in an existing React Native app, start by installing expo in your project. Then, follow the additional instructions in the library's README under the "Installation in bare React Native projects" section.
You can configure expo-audio using its built-in config plugin if you use config plugins in your project (EAS Build or npx expo run:[android|ios]). The plugin allows you to configure various properties that cannot be set at runtime and require building a new app binary to take effect. If your app does not use EAS Build, then you'll need to manually configure the package.
```json
{
  "expo": {
    "plugins": [
      [
        "expo-audio",
        {
          "microphonePermission": "Allow $(PRODUCT_NAME) to access your microphone."
        }
      ]
    ]
  }
}
```
Name | Default | Description |
---|---|---|
microphonePermission | "Allow $(PRODUCT_NAME) to access your microphone" | Only for: iOS A string to set the NSMicrophoneUsageDescription permission message. |
```jsx
import { View, StyleSheet, Button } from 'react-native';
import { useAudioPlayer } from 'expo-audio';

const audioSource = require('./assets/Hello.mp3');

export default function App() {
  const player = useAudioPlayer(audioSource);

  return (
    <View style={styles.container}>
      <Button title="Play Sound" onPress={() => player.play()} />
    </View>
  );
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
    justifyContent: 'center',
    backgroundColor: '#ecf0f1',
    padding: 10,
  },
});
```
```jsx
import { useEffect } from 'react';
import { View, StyleSheet, Button, Alert } from 'react-native';
import { useAudioRecorder, AudioModule, RecordingPresets } from 'expo-audio';

export default function App() {
  const audioRecorder = useAudioRecorder(RecordingPresets.HIGH_QUALITY);

  const record = () => audioRecorder.record();

  const stopRecording = async () => {
    // The recording will be available on `audioRecorder.uri`.
    await audioRecorder.stop();
  };

  useEffect(() => {
    (async () => {
      const status = await AudioModule.requestRecordingPermissionsAsync();
      if (!status.granted) {
        Alert.alert('Permission to access microphone was denied');
      }
    })();
  }, []);

  return (
    <View style={styles.container}>
      <Button
        title={audioRecorder.isRecording ? 'Stop Recording' : 'Start Recording'}
        onPress={audioRecorder.isRecording ? stopRecording : record}
      />
    </View>
  );
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
    justifyContent: 'center',
    backgroundColor: '#ecf0f1',
    padding: 10,
  },
});
```
On iOS, audio playback and recording in the background is only available in standalone apps, and it requires some extra configuration. Each background feature requires a special key in the UIBackgroundModes array in your Info.plist file. In standalone apps this array is empty by default, so to use background features you will need to add appropriate keys to your app.json configuration.

Here is an example of app.json that enables audio playback in the background:
```json
{
  "expo": {
    ...
    "ios": {
      ...
      "infoPlist": {
        ...
        "UIBackgroundModes": [
          "audio"
        ]
      }
    }
  }
}
```
In most cases, the useAudioPlayer hook should be used to create an AudioPlayer instance. It manages the player's lifecycle and ensures that it is properly disposed of when the component is unmounted. However, in some advanced use cases, it might be necessary to create an AudioPlayer that does not get automatically destroyed when the component is unmounted. In those cases, the AudioPlayer can be created using the createAudioPlayer function. You need to be aware of the risks that come with this approach, as it is your responsibility to call the release() method when the player is no longer needed. If not handled properly, this approach may lead to memory leaks.
```js
import { createAudioPlayer } from 'expo-audio';
const player = createAudioPlayer(audioSource);
```
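For example, a minimal sketch of manual lifecycle management, reusing the audioSource value from the playback example above:

```js
import { createAudioPlayer } from 'expo-audio';

const audioSource = require('./assets/Hello.mp3');

// Create a player that is not tied to a component's lifecycle.
const player = createAudioPlayer(audioSource);
player.play();

// Later, when the player is no longer needed, release it manually.
// Skipping this step leaks the underlying native player.
player.release();
```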
On web, the recording options passed to prepareToRecordAsync will be passed directly to the MediaRecorder API (or its polyfill). See the getUserMedia() security documentation for more details.

```js
import { useAudioPlayer, useAudioRecorder } from 'expo-audio';
```
RecordingPresets
Type: Record<string, RecordingOptions>
Constant which contains definitions of the two preset examples of RecordingOptions, as implemented in the Audio SDK.
HIGH_QUALITY
```js
RecordingPresets.HIGH_QUALITY = {
  extension: '.m4a',
  sampleRate: 44100,
  numberOfChannels: 2,
  bitRate: 128000,
  android: {
    outputFormat: 'mpeg4',
    audioEncoder: 'aac',
  },
  ios: {
    outputFormat: IOSOutputFormat.MPEG4AAC,
    audioQuality: AudioQuality.MAX,
    linearPCMBitDepth: 16,
    linearPCMIsBigEndian: false,
    linearPCMIsFloat: false,
  },
  web: {
    mimeType: 'audio/webm',
    bitsPerSecond: 128000,
  },
};
```
LOW_QUALITY
```js
RecordingPresets.LOW_QUALITY = {
  extension: '.m4a',
  sampleRate: 44100,
  numberOfChannels: 2,
  bitRate: 64000,
  android: {
    extension: '.3gp',
    outputFormat: '3gp',
    audioEncoder: 'amr_nb',
  },
  ios: {
    audioQuality: AudioQuality.MIN,
    outputFormat: IOSOutputFormat.MPEG4AAC,
    linearPCMBitDepth: 16,
    linearPCMIsBigEndian: false,
    linearPCMIsFloat: false,
  },
  web: {
    mimeType: 'audio/webm',
    bitsPerSecond: 128000,
  },
};
```
useAudioRecorder(options, statusListener)

Parameter | Type |
---|---|
options | RecordingOptions |
statusListener(optional) | (status: RecordingStatus) => void |
Returns: AudioRecorder
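A minimal sketch of passing the optional status listener; the fields logged here come from the RecordingStatus type documented below:

```js
import { useAudioRecorder, RecordingPresets } from 'expo-audio';

export function useLoggingRecorder() {
  // The second argument receives RecordingStatus updates.
  return useAudioRecorder(RecordingPresets.HIGH_QUALITY, status => {
    console.log('finished:', status.isFinished, 'error:', status.error);
  });
}
```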
AudioPlayer
Type: Class extends SharedObject<AudioEvents>

AudioPlayer Properties
isAudioSamplingSupported
Type: boolean
Boolean value indicating whether audio sampling is supported on the platform.

shouldCorrectPitch
Type: boolean
A boolean describing if we are correcting the pitch for a changed rate.
AudioPlayer Methods
seekTo(seconds)

Parameter | Type | Description |
---|---|---|
seconds | number | The number of seconds to seek by. |

Seeks the playback by the given number of seconds.

Returns: Promise<void>
setPlaybackRate(rate, pitchCorrectionQuality)

Parameter | Type | Description |
---|---|---|
rate | number | The playback rate of the audio. |
pitchCorrectionQuality(optional) | PitchCorrectionQuality | The quality of the pitch correction. |

Sets the current playback rate of the audio.

Returns: void
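As a rough sketch, these methods can be combined with the useAudioPlayer hook from the playback example above; the 30-second skip and 1.5x rate below are arbitrary illustration values:

```js
import { useAudioPlayer } from 'expo-audio';

export function usePlayerControls(audioSource) {
  const player = useAudioPlayer(audioSource);

  return {
    // Seek the playback by 30 seconds, per the seekTo entry above.
    skipForward: () => player.seekTo(30),
    // Play at 1.5x speed with high-quality pitch correction.
    speedUp: () => player.setPlaybackRate(1.5, 'high'),
  };
}
```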
AudioRecorder
Type: Class extends SharedObject<RecordingEvents>
AudioRecorder Properties
AudioRecorder Methods
getAvailableInputs()
Returns a list of available recording inputs. This method can only be called if the Recording has been prepared.

Returns: RecordingInput[]
A Promise that is fulfilled with an array of RecordingInput objects.
getCurrentInput()
Returns the currently-selected recording input. This method can only be called if the Recording has been prepared.

Returns: RecordingInput
A Promise that is fulfilled with a RecordingInput object.
Status of the current recording.

Returns: RecorderState
prepareToRecordAsync(options)

Parameter | Type |
---|---|
options(optional) | Partial<RecordingOptions> |

Prepares the recorder for recording.

Returns: Promise<void>
Parameter | Type | Description |
---|---|---|
seconds | number | The time in seconds to stop recording at. |
Stops the recording once the specified time has elapsed.

Returns: void
setInput(inputUid)

Parameter | Type | Description |
---|---|---|
inputUid | string | The uid of a RecordingInput. |

Sets the current recording input.

Returns: void
A Promise that is resolved if successful or rejected if not.
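A minimal sketch of listing and switching recording inputs, using the method names above; the uid field on RecordingInput is an assumption here, chosen to match the inputUid parameter of setInput:

```js
// `recorder` is an AudioRecorder, for example from the
// useAudioRecorder(RecordingPresets.HIGH_QUALITY) call shown earlier.
async function selectFirstInput(recorder) {
  // Inputs can only be queried once the recorder has been prepared.
  await recorder.prepareToRecordAsync();

  const inputs = recorder.getAvailableInputs();
  console.log('Available inputs:', inputs);

  if (inputs.length > 0) {
    // `uid` is assumed to be the identifier carried by RecordingInput.
    recorder.setInput(inputs[0].uid);
  }
}
```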
Parameter | Type | Description |
---|---|---|
seconds | number | The time in seconds to start recording at. |
Starts the recording at the given time.

Returns: void
stop()
Stops the recording.

Returns: Promise<void>
createAudioPlayer(source, updateInterval)

Parameter | Type |
---|---|
source(optional) | AudioSource |
updateInterval(optional) | number |

Creates an instance of an AudioPlayer that doesn't release automatically. For most use cases you should use the useAudioPlayer hook instead. See the Using the AudioPlayer directly section for more details.

Returns: AudioPlayer
getRecordingPermissionsAsync()
Returns: Promise<PermissionResponse>

requestRecordingPermissionsAsync()
Returns: Promise<PermissionResponse>
AndroidAudioEncoder
Literal Type: string
Acceptable values are: 'default' | 'amr_nb' | 'amr_wb' | 'aac' | 'he_aac' | 'aac_eld'
AndroidOutputFormat
Literal Type: string
Acceptable values are: 'default' | '3gp' | 'mpeg4' | 'amrnb' | 'amrwb' | 'aac_adts' | 'mpeg2ts' | 'webm'
AudioEvents

Property | Type | Description |
---|---|---|
audioSampleUpdate | (data: AudioSample) => void | - |
playbackStatusUpdate | (status: AudioStatus) => void | - |
AudioMode

Property | Type | Description |
---|---|---|
allowsRecording | boolean | Only for: iOS Whether the audio session allows recording. |
interruptionMode | InterruptionMode | Only for: iOS Determines how the audio session interacts with other sessions. |
playsInSilentMode | boolean | Only for: iOS Determines if audio playback is allowed when the device is in silent mode. |
shouldPlayInBackground | boolean | Whether the audio session stays active when the app moves to the background. |
shouldRouteThroughEarpiece | boolean | Only for: Android Whether the audio should route through the earpiece. |
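As a sketch of applying these options, assuming expo-audio exports a setAudioModeAsync function that accepts a partial AudioMode (the function name is an assumption and is not documented above):

```js
import { setAudioModeAsync } from 'expo-audio';

// Hypothetical helper: configure the audio session for background playback.
async function enableBackgroundPlayback() {
  // `setAudioModeAsync` is assumed here; only the AudioMode fields
  // themselves come from the table above.
  await setAudioModeAsync({
    playsInSilentMode: true,      // iOS: keep playing in silent mode
    shouldPlayInBackground: true, // keep the session active in the background
    interruptionMode: 'doNotMix', // iOS: do not mix with other sessions
  });
}
```

On iOS, background playback additionally requires the UIBackgroundModes entry shown in the app.json example earlier.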
AudioSample

Property | Type | Description |
---|---|---|
channels | AudioSampleChannel[] | - |
timestamp | number | - |
AudioSource
Type: string or number or null or an object shaped as below:

Property | Type | Description |
---|---|---|
assetId(optional) | number | The asset ID of a local audio asset, acquired with the require function. |
headers(optional) | Record<string, string> | An object representing the HTTP headers to send along with the request for a remote audio source. On web requires the Access-Control-Allow-Origin header returned with the response to indicate CORS support. |
uri(optional) | string | A string representing the resource identifier for the audio, which could be an HTTPS address, a local file path, or the name of a static audio file resource. |
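For illustration, either a local asset or a remote object source can be passed to useAudioPlayer; the URL and header values below are placeholders:

```js
import { useAudioPlayer } from 'expo-audio';

// A local static asset (the bundler resolves require() to an asset ID).
const localSource = require('./assets/Hello.mp3');

// A remote file with optional HTTP headers (placeholder values).
const remoteSource = {
  uri: 'https://example.com/audio/episode.mp3',
  headers: { Authorization: 'Bearer <token>' },
};

// Inside a component, either shape works:
// const player = useAudioPlayer(localSource);
// const player = useAudioPlayer(remoteSource);
```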
AudioStatus

Property | Type | Description |
---|---|---|
currentTime | number | - |
didJustFinish | boolean | - |
duration | number | - |
id | number | - |
isBuffering | boolean | - |
isLoaded | boolean | - |
loop | boolean | - |
mute | boolean | - |
playbackRate | number | - |
playbackState | string | - |
playing | boolean | - |
reasonForWaitingToPlay | string | - |
shouldCorrectPitch | boolean | - |
timeControlStatus | string | - |
BitRateStrategy
Literal Type: string
Acceptable values are: 'constant' | 'longTermAverage' | 'variableConstrained' | 'variable'
InterruptionMode
Literal Type: string
Acceptable values are: 'mixWithOthers' | 'doNotMix' | 'duckOthers'
PermissionExpiration
Literal Type: multiple types
Permission expiration time. Currently, all permissions are granted permanently.
Acceptable values are: 'never' | number
PermissionResponse
An object obtained by permissions get and request functions.
Property | Type | Description |
---|---|---|
canAskAgain | boolean | Indicates if user can be asked again for specific permission. If not, one should be directed to the Settings app in order to enable/disable the permission. |
expires | PermissionExpiration | Determines time when the permission expires. |
granted | boolean | A convenience boolean that indicates if the permission is granted. |
status | PermissionStatus | Determines the status of the permission. |
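A sketch of a permission check built on these fields, using the AudioModule object from the recording example; getRecordingPermissionsAsync is assumed to exist alongside the request function shown there:

```js
import { AudioModule } from 'expo-audio';

async function ensureMicrophonePermission() {
  // `getRecordingPermissionsAsync` is assumed here; the recording example
  // above only demonstrates requestRecordingPermissionsAsync.
  const current = await AudioModule.getRecordingPermissionsAsync();
  if (current.granted) {
    return true;
  }
  if (!current.canAskAgain) {
    // The user must enable the permission from the Settings app.
    return false;
  }
  const requested = await AudioModule.requestRecordingPermissionsAsync();
  return requested.granted;
}
```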
PitchCorrectionQuality
Literal Type: string
Acceptable values are: 'low' | 'medium' | 'high'
RecorderState

Property | Type | Description |
---|---|---|
canRecord | boolean | - |
durationMillis | number | - |
isRecording | boolean | - |
mediaServicesDidReset | boolean | - |
metering(optional) | number | - |
url | string | null | - |
RecordingEvents

Property | Type | Description |
---|---|---|
recordingStatusUpdate | (status: RecordingStatus) => void | - |
RecordingOptions

Property | Type | Description |
---|---|---|
android | RecordingOptionsAndroid | Only for: Android Recording options for the Android platform. |
bitRate | number | The desired bit rate. Example: 128000 |
extension | string | The desired file extension. Example: .m4a |
ios | RecordingOptionsIos | Only for: iOS Recording options for the iOS platform. |
isMeteringEnabled(optional) | boolean | A boolean that determines whether audio level information will be part of the status object under the "metering" key. |
numberOfChannels | number | The desired number of channels. Example: 2 |
sampleRate | number | The desired sample rate. Example: 44100 |
web(optional) | RecordingOptionsWeb | Only for: Web Recording options for the Web platform. |
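For example, custom options can be built by extending one of the presets shown earlier; the overridden values below are arbitrary illustrations:

```js
import { useAudioRecorder, RecordingPresets } from 'expo-audio';

// Start from HIGH_QUALITY and override a few fields (arbitrary values).
const customRecordingOptions = {
  ...RecordingPresets.HIGH_QUALITY,
  bitRate: 64000,
  numberOfChannels: 1,
  isMeteringEnabled: true, // expose audio levels under the "metering" key
};

// Inside a component:
// const recorder = useAudioRecorder(customRecordingOptions);
```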
RecordingOptionsAndroid

Property | Type | Description |
---|---|---|
audioEncoder | AndroidAudioEncoder | The desired audio encoder. See the AndroidAudioEncoder type for all valid values. |
extension(optional) | string | The desired file extension. Example: .3gp |
maxFileSize(optional) | number | The desired maximum file size in bytes, after which the recording will stop. |
outputFormat | AndroidOutputFormat | The desired file format. See the AndroidOutputFormat type for all valid values. |
sampleRate(optional) | number | The desired sample rate. Example: 44100 |
RecordingOptionsIos

Property | Type | Description |
---|---|---|
audioQuality | AudioQuality | number | The desired audio quality. See the AudioQuality enum for all valid values. |
bitDepthHint(optional) | number | The desired bit depth hint. |
bitRateStrategy(optional) | number | The desired bit rate strategy. See the next section for an enumeration of all valid values of bitRateStrategy. |
extension(optional) | string | The desired file extension. |
linearPCMBitDepth(optional) | number | The desired PCM bit depth. Example: 16 |
linearPCMIsBigEndian(optional) | boolean | A boolean describing if the PCM data should be formatted in big endian. |
linearPCMIsFloat(optional) | boolean | A boolean describing if the PCM data should be encoded in floating point or integral values. |
outputFormat(optional) | string | IOSOutputFormat | number | The desired file format. See the IOSOutputFormat enum for all valid values. |
sampleRate(optional) | number | The desired sample rate. Example: 44100 |
RecordingOptionsWeb

Property | Type | Description |
---|---|---|
bitsPerSecond(optional) | number | - |
mimeType(optional) | string | - |
RecordingStatus

Property | Type | Description |
---|---|---|
error | string | null | - |
hasError | boolean | - |
id | number | - |
isFinished | boolean | - |
url | string | null | - |