This is documentation for the next SDK version. For up-to-date documentation, see the latest version (SDK 53).
A library that provides an API to implement audio playback and recording in apps.
expo-audio is a cross-platform audio library for accessing the native audio capabilities of the device.
Note that audio automatically stops if headphones/bluetooth audio devices are disconnected.
Installation
- npx expo install expo-audio
If you are installing this in an existing React Native app, make sure to install expo in your project.
Configuration in app config
You can configure expo-audio using its built-in config plugin if you use config plugins in your project (EAS Build or npx expo run:[android|ios]). The plugin allows you to configure various properties that cannot be set at runtime and require building a new app binary to take effect. If your app does not use EAS Build, then you'll need to manually configure the package.
Example app.json with config plugin
{
"expo": {
"plugins": [
[
"expo-audio",
{
"microphonePermission": "Allow $(PRODUCT_NAME) to access your microphone."
}
]
]
}
}
Configurable properties
Name | Default | Description |
---|---|---|
microphonePermission | "Allow $(PRODUCT_NAME) to access your microphone" | Only for: iOS. A string to set the NSMicrophoneUsageDescription permission message. |
Usage
Playing sounds
import { View, StyleSheet, Button } from 'react-native';
import { useAudioPlayer } from 'expo-audio';
const audioSource = require('./assets/Hello.mp3');
export default function App() {
const player = useAudioPlayer(audioSource);
return (
<View style={styles.container}>
<Button title="Play Sound" onPress={() => player.play()} />
<Button
title="Replay Sound"
onPress={() => {
player.seekTo(0);
player.play();
}}
/>
</View>
);
}
const styles = StyleSheet.create({
container: {
flex: 1,
justifyContent: 'center',
backgroundColor: '#ecf0f1',
padding: 10,
},
});
Note: If you're migrating from expo-av, you'll notice that expo-audio doesn't automatically reset the playback position when audio finishes. After play(), the player stays paused at the end of the sound. To play it again, call seekTo(seconds) to reset the position, as shown in the example above.
Recording sounds
import { useState, useEffect } from 'react';
import { View, StyleSheet, Button, Alert } from 'react-native';
import {
useAudioRecorder,
AudioModule,
RecordingPresets,
setAudioModeAsync,
useAudioRecorderState,
} from 'expo-audio';
export default function App() {
const audioRecorder = useAudioRecorder(RecordingPresets.HIGH_QUALITY);
const recorderState = useAudioRecorderState(audioRecorder);
const record = async () => {
await audioRecorder.prepareToRecordAsync();
audioRecorder.record();
};
const stopRecording = async () => {
// The recording will be available on `audioRecorder.uri`.
await audioRecorder.stop();
};
useEffect(() => {
(async () => {
const status = await AudioModule.requestRecordingPermissionsAsync();
if (!status.granted) {
Alert.alert('Permission to access microphone was denied');
}
await setAudioModeAsync({
playsInSilentMode: true,
allowsRecording: true,
});
})();
}, []);
return (
<View style={styles.container}>
<Button
title={recorderState.isRecording ? 'Stop Recording' : 'Start Recording'}
onPress={recorderState.isRecording ? stopRecording : record}
/>
</View>
);
}
const styles = StyleSheet.create({
container: {
flex: 1,
justifyContent: 'center',
backgroundColor: '#ecf0f1',
padding: 10,
},
});
Playing or recording audio in background iOS
On iOS, audio playback and recording in background is only available in standalone apps, and it requires some extra configuration.
On iOS, each background feature requires a special key in the UIBackgroundModes array in your Info.plist file. In standalone apps this array is empty by default, so to use background features you will need to add appropriate keys to your app.json configuration.
Here is an example of app.json that enables audio playback in the background:
{
"expo": {
...
"ios": {
...
"infoPlist": {
...
"UIBackgroundModes": [
"audio"
]
}
}
}
}
Using the AudioPlayer directly
In most cases, the useAudioPlayer hook should be used to create an AudioPlayer instance. It manages the player's lifecycle and ensures that it is properly disposed of when the component is unmounted. However, in some advanced use cases, it might be necessary to create an AudioPlayer that does not get automatically destroyed when the component is unmounted.
In those cases, the AudioPlayer can be created using the createAudioPlayer function. Be aware of the risks that come with this approach, as it is your responsibility to call the release() method when the player is no longer needed. If not handled properly, this approach may lead to memory leaks.
import { createAudioPlayer } from 'expo-audio';
const player = createAudioPlayer(audioSource);
// When you are done with the player, you must release it manually:
player.release();
Notes on web usage
- A MediaRecorder issue on Chrome produces WebM files missing the duration metadata. See the open Chromium issue.
- MediaRecorder encoding options and other configurations are inconsistent across browsers. Using a polyfill such as kbumsik/opus-media-recorder or ai/audio-recorder-polyfill in your application will improve your experience. Any options passed to prepareToRecordAsync will be passed directly to the MediaRecorder API, and as such to the polyfill.
- Web browsers require sites to be served securely for them to listen to a mic. See MediaDevices getUserMedia() security for more details.
API
import { useAudioPlayer, useAudioRecorder } from 'expo-audio';
Constants
RecordingPresets
Type: Record<string, RecordingOptions>
Constant that contains definitions of the two preset examples of RecordingOptions, as implemented in the Audio SDK.
HIGH_QUALITY
RecordingPresets.HIGH_QUALITY = {
extension: '.m4a',
sampleRate: 44100,
numberOfChannels: 2,
bitRate: 128000,
android: {
outputFormat: 'mpeg4',
audioEncoder: 'aac',
},
ios: {
outputFormat: IOSOutputFormat.MPEG4AAC,
audioQuality: AudioQuality.MAX,
linearPCMBitDepth: 16,
linearPCMIsBigEndian: false,
linearPCMIsFloat: false,
},
web: {
mimeType: 'audio/webm',
bitsPerSecond: 128000,
},
};
LOW_QUALITY
RecordingPresets.LOW_QUALITY = {
extension: '.m4a',
sampleRate: 44100,
numberOfChannels: 2,
bitRate: 64000,
android: {
extension: '.3gp',
outputFormat: '3gp',
audioEncoder: 'amr_nb',
},
ios: {
audioQuality: AudioQuality.MIN,
outputFormat: IOSOutputFormat.MPEG4AAC,
linearPCMBitDepth: 16,
linearPCMIsBigEndian: false,
linearPCMIsFloat: false,
},
web: {
mimeType: 'audio/webm',
bitsPerSecond: 128000,
},
};
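The presets above are plain RecordingOptions objects, so a custom configuration can be derived by spreading a preset and overriding individual fields. A minimal sketch; the preset literal is inlined here so the snippet is self-contained, but in an app you would spread RecordingPresets.HIGH_QUALITY imported from expo-audio:

```typescript
// Subset of RecordingPresets.HIGH_QUALITY (copied from the preset above).
const HIGH_QUALITY = {
  extension: '.m4a',
  sampleRate: 44100,
  numberOfChannels: 2,
  bitRate: 128000,
};

// Override only the fields you need; everything else carries over from the preset.
const voiceNoteOptions = {
  ...HIGH_QUALITY,
  numberOfChannels: 1, // mono is enough for speech
  bitRate: 64000,      // smaller files
};

console.log(voiceNoteOptions.sampleRate); // 44100, inherited from the preset
console.log(voiceNoteOptions.bitRate);    // 64000, overridden
```

The resulting object can be passed to useAudioRecorder in place of a preset.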
Hooks
useAudioPlayer(source, updateInterval)
Parameter | Type | Description |
---|---|---|
source(optional) | AudioSource | The audio source to load. Can be a local asset via require() or a remote URL. Default: null |
updateInterval(optional) | number | How often (in milliseconds) to emit playback status updates. Default: 500 |
Creates an AudioPlayer
instance that automatically releases when the component unmounts.
This hook manages the player's lifecycle and ensures it's properly disposed when no longer needed. The player will start loading the audio source immediately upon creation.
AudioPlayer
An AudioPlayer
instance that's automatically managed by the component lifecycle.
Example
import { useAudioPlayer } from 'expo-audio';
function MyComponent() {
const player = useAudioPlayer(require('./sound.mp3'));
return (
<Button title="Play" onPress={() => player.play()} />
);
}
useAudioPlayerStatus(player)
Parameter | Type | Description |
---|---|---|
player | AudioPlayer | The AudioPlayer instance to get status updates from. |
Hook that provides real-time playback status updates for an AudioPlayer
.
This hook automatically subscribes to playback status changes and returns the current status. The status includes information about playback state, current time, duration, loading state, and more.
AudioStatus
The current AudioStatus
object containing playback information.
Example
import { useAudioPlayer, useAudioPlayerStatus } from 'expo-audio';
function PlayerComponent() {
const player = useAudioPlayer(require('./sound.mp3'));
const status = useAudioPlayerStatus(player);
return (
<View>
<Text>Playing: {status.isPlaying ? 'Yes' : 'No'}</Text>
<Text>Current Time: {status.currentTime}s</Text>
<Text>Duration: {status.duration}s</Text>
</View>
);
}
useAudioRecorder(options, statusListener)
Parameter | Type | Description |
---|---|---|
options | RecordingOptions | Recording configuration options including format, quality, sample rate, etc. |
statusListener(optional) | (status: RecordingStatus) => void | Optional callback function that receives recording status updates. |
Hook that creates an AudioRecorder
instance for recording audio.
This hook manages the recorder's lifecycle and ensures it's properly disposed when no longer needed. The recorder is automatically prepared with the provided options and can be used to record audio.
AudioRecorder
An AudioRecorder
instance that's automatically managed by the component lifecycle.
Example
import { useAudioRecorder, RecordingPresets } from 'expo-audio';
function RecorderComponent() {
const recorder = useAudioRecorder(
RecordingPresets.HIGH_QUALITY,
(status) => console.log('Recording status:', status)
);
const startRecording = async () => {
await recorder.prepareToRecordAsync();
recorder.record();
};
return (
<Button title="Start Recording" onPress={startRecording} />
);
}
useAudioRecorderState(recorder, interval)
Parameter | Type | Description |
---|---|---|
recorder | AudioRecorder | The AudioRecorder instance to track state for. |
interval(optional) | number | How often (in milliseconds) to poll the recorder's status. Default: 500 |
Hook that provides real-time recording state updates for an AudioRecorder
.
This hook polls the recorder's status at regular intervals and returns the current recording state. Use this when you need to monitor the recording status without setting up a status listener.
RecorderState
The current RecorderState
containing recording information.
Example
import { useAudioRecorder, useAudioRecorderState, RecordingPresets } from 'expo-audio';
function RecorderStatusComponent() {
const recorder = useAudioRecorder(RecordingPresets.HIGH_QUALITY);
const state = useAudioRecorderState(recorder);
return (
<View>
<Text>Recording: {state.isRecording ? 'Yes' : 'No'}</Text>
<Text>Duration: {state.currentTime}s</Text>
<Text>Can Record: {state.canRecord ? 'Yes' : 'No'}</Text>
</View>
);
}
useAudioSampleListener(player, listener)
Parameter | Type | Description |
---|---|---|
player | AudioPlayer | The AudioPlayer to sample audio from. |
listener | (data: AudioSample) => void | Function called with each audio sample containing waveform data. |
Hook that sets up audio sampling for an AudioPlayer
and calls a listener with audio data.
This hook enables audio sampling on the player (if supported) and subscribes to audio sample updates. Audio sampling provides real-time access to audio waveform data for visualization or analysis.
Note: Audio sampling requires
RECORD_AUDIO
permission on Android and is not supported on all platforms.
void
Example
import { useAudioPlayer, useAudioSampleListener } from 'expo-audio';
function AudioVisualizerComponent() {
const player = useAudioPlayer(require('./music.mp3'));
useAudioSampleListener(player, (sample) => {
// Use sample.channels array for audio visualization
console.log('Audio sample:', sample.channels[0].frames);
});
return <AudioWaveform player={player} />;
}
Classes
AudioPlayer
Type: Class extends SharedObject<AudioEvents>
AudioPlayer Properties
Property | Type | Description |
---|---|---|
isAudioSamplingSupported | boolean | Boolean value indicating whether audio sampling is supported on the platform. |
isLoaded | boolean | Boolean value indicating whether the player has finished loading. |
paused | boolean | Boolean value indicating whether the player is currently paused. |
playing | boolean | Boolean value indicating whether the player is currently playing. |
shouldCorrectPitch | boolean | A boolean describing whether the pitch is corrected for a changed playback rate. |
AudioPlayer Methods
seekTo(seconds, toleranceMillisBefore, toleranceMillisAfter)
Parameter | Type | Description |
---|---|---|
seconds | number | The number of seconds to seek by. |
toleranceMillisBefore(optional) | number | The tolerance allowed before the requested seek time, in milliseconds. iOS only. |
toleranceMillisAfter(optional) | number | The tolerance allowed after the requested seek time, in milliseconds. iOS only. |
Seeks the playback by the given number of seconds.
Promise<void>
setPlaybackRate(rate, pitchCorrectionQuality)
Parameter | Type | Description |
---|---|---|
rate | number | The playback rate of the audio. |
pitchCorrectionQuality(optional) | PitchCorrectionQuality | The quality of the pitch correction. |
Sets the current playback rate of the audio.
void
AudioRecorder
Type: Class extends SharedObject<RecordingEvents>
AudioRecorder Properties
Property | Type | Description |
---|---|---|
isRecording | boolean | Boolean value indicating whether the recording is in progress. |
AudioRecorder Methods
getAvailableInputs()
Returns a list of available recording inputs. This method can only be called if the Recording has been prepared.
RecordingInput[]
A Promise
that is fulfilled with an array of RecordingInput
objects.
getCurrentInput()
Returns the currently-selected recording input. This method can only be called if the Recording has been prepared.
RecordingInput
A Promise
that is fulfilled with a RecordingInput
object.
getStatus()
Status of the current recording.
RecorderState
prepareToRecordAsync(options)
Parameter | Type |
---|---|
options(optional) | Partial<RecordingOptions> |
Prepares the recorder for recording.
Promise<void>
recordForDuration(seconds)
Parameter | Type | Description |
---|---|---|
seconds | number | The time in seconds to stop recording at. |
Stops the recording once the specified time has elapsed.
void
setInput(inputUid)
Parameter | Type | Description |
---|---|---|
inputUid | string | The uid of a RecordingInput. |
Sets the current recording input.
void
A Promise
that is resolved if successful or rejected if not.
startRecordingAtTime(seconds)
Parameter | Type | Description |
---|---|---|
seconds | number | The time in seconds to start recording at. |
Starts the recording at the given time.
void
stop()
Stops the recording.
Promise<void>
Methods
createAudioPlayer(source, updateInterval)
Parameter | Type |
---|---|
source(optional) | AudioSource |
updateInterval(optional) | number |
Creates an instance of an AudioPlayer
that doesn't release automatically.
For most use cases you should use the useAudioPlayer hook instead. See the Using the AudioPlayer directly section for more details.
AudioPlayer
getRecordingPermissionsAsync()
Checks the current status of recording permissions without requesting them.
This function returns the current permission status for microphone access
without triggering a permission request dialog. Use this to check permissions
before deciding whether to call requestRecordingPermissionsAsync()
.
Promise<PermissionResponse>
A Promise that resolves to a PermissionResponse
object containing the current permission status.
Example
import { getRecordingPermissionsAsync, requestRecordingPermissionsAsync } from 'expo-audio';
const ensureRecordingPermissions = async () => {
const { status } = await getRecordingPermissionsAsync();
if (status !== 'granted') {
// Permission not granted, request it
const { granted } = await requestRecordingPermissionsAsync();
return granted;
}
return true; // Already granted
};
requestRecordingPermissionsAsync()
Requests permission to record audio from the microphone.
This function prompts the user for microphone access permission, which is required
for audio recording functionality. On iOS, this will show the system permission dialog.
On Android, this requests the RECORD_AUDIO
permission.
Promise<PermissionResponse>
A Promise that resolves to a PermissionResponse
object containing the permission status.
Example
import { requestRecordingPermissionsAsync } from 'expo-audio';
const checkPermissions = async () => {
const { status, granted } = await requestRecordingPermissionsAsync();
if (granted) {
console.log('Recording permission granted');
} else {
console.log('Recording permission denied:', status);
}
};
setAudioModeAsync(mode)
Parameter | Type | Description |
---|---|---|
mode | Partial<AudioMode> | Partial audio mode configuration object. Only specified properties will be updated. |
Configures the global audio behavior and session settings.
This function allows you to control how your app's audio interacts with other apps, background playback behavior, audio routing, and interruption handling.
Promise<void>
A Promise that resolves when the audio mode has been applied.
Example
import { setAudioModeAsync } from 'expo-audio';
// Configure audio for background playback
await setAudioModeAsync({
playsInSilentMode: true,
shouldPlayInBackground: true,
interruptionModeAndroid: 'duckOthers',
interruptionMode: 'mixWithOthers'
});
// Configure audio for recording
await setAudioModeAsync({
allowsRecording: true,
playsInSilentMode: false
});
setIsAudioActiveAsync(active)
Parameter | Type | Description |
---|---|---|
active | boolean | Whether audio should be active (true) or inactive (false). |
Enables or disables the audio subsystem globally.
When set to false
, this will pause all audio playback and prevent new audio from playing.
This is useful for implementing app-wide audio controls or responding to system events.
Promise<void>
A Promise that resolves when the audio state has been updated.
Example
import { setIsAudioActiveAsync } from 'expo-audio';
// Disable all audio when app goes to background
const handleAppStateChange = async (nextAppState) => {
if (nextAppState === 'background') {
await setIsAudioActiveAsync(false);
} else if (nextAppState === 'active') {
await setIsAudioActiveAsync(true);
}
};
Event Subscriptions
useAudioSampleListener(player, listener)
Documented in the Hooks section above.
Types
AndroidAudioEncoder
Literal Type: string
Audio encoder options for Android recording.
Specifies the audio codec used to encode recorded audio on Android. Different encoders offer different quality, compression, and compatibility trade-offs.
Acceptable values are: 'default'
| 'amr_nb'
| 'amr_wb'
| 'aac'
| 'he_aac'
| 'aac_eld'
AndroidOutputFormat
Literal Type: string
Audio output format options for Android recording.
Specifies the container format for recorded audio files on Android. Different formats have different compatibility and compression characteristics.
Acceptable values are: 'default'
| '3gp'
| 'mpeg4'
| 'amrnb'
| 'amrwb'
| 'aac_adts'
| 'mpeg2ts'
| 'webm'
AudioEvents
Event types that an AudioPlayer
can emit.
These events allow you to listen for changes in playback state and receive real-time audio data.
Use player.addListener()
to subscribe to these events.
Property | Type | Description |
---|---|---|
audioSampleUpdate | (data: AudioSample) => void | Fired when audio sampling is enabled and new sample data is available. |
playbackStatusUpdate | (status: AudioStatus) => void | Fired when the player's status changes (play, pause, seek, load, and so on). |
AudioMode
Property | Type | Description |
---|---|---|
allowsRecording(optional) | boolean | Only for: iOS Whether the audio session allows recording. Default: false |
interruptionMode | InterruptionMode | Only for: iOS Determines how the audio session interacts with other sessions. |
interruptionModeAndroid | InterruptionModeAndroid | Only for: Android Determines how the audio session interacts with other sessions on Android. |
playsInSilentMode | boolean | Only for: iOS Determines if audio playback is allowed when the device is in silent mode. |
shouldPlayInBackground(optional) | boolean | Whether the audio session stays active when the app moves to the background. Default: false |
shouldRouteThroughEarpiece | boolean | Only for: Android Whether the audio should route through the earpiece. |
AudioSample
Represents a single audio sample containing waveform data from all audio channels.
Audio samples are provided in real-time when audio sampling is enabled on an AudioPlayer
.
Each sample contains the raw PCM audio data for all channels (mono has 1 channel, stereo has 2).
This data can be used for audio visualization, analysis, or processing.
Property | Type | Description |
---|---|---|
channels | AudioSampleChannel[] | Array of audio channels, each containing PCM frame data. Stereo audio will have 2 channels (left/right). |
timestamp | number | Timestamp of this sample relative to the audio track's timeline, in seconds. |
AudioSampleChannel
Represents audio data for a single channel (for example, left or right in stereo audio).
Contains the raw PCM (Pulse Code Modulation) audio frames for this channel. Frame values are normalized between -1.0 and 1.0, where 0 represents silence.
Property | Type | Description |
---|---|---|
frames | number[] | Array of PCM audio frame values, each between -1.0 and 1.0. |
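Since frame values are normalized PCM samples, simple loudness metrics can be computed directly from a channel's frames. Below is a hedged sketch of an RMS (root mean square) level function, as might be used inside a useAudioSampleListener callback to drive a volume meter; the function name is illustrative, not part of the API:

```typescript
// Compute the RMS loudness of one channel's PCM frames.
// Frame values are normalized to [-1.0, 1.0], so the result is also in [0, 1].
function rmsLevel(frames: number[]): number {
  if (frames.length === 0) return 0;
  const sumOfSquares = frames.reduce((acc, f) => acc + f * f, 0);
  return Math.sqrt(sumOfSquares / frames.length);
}

// Silence (all zeros) gives 0; a full-scale square wave gives 1.
console.log(rmsLevel([0, 0, 0]));      // 0
console.log(rmsLevel([1, -1, 1, -1])); // 1
```

In a listener you would call this per channel, for example rmsLevel(sample.channels[0].frames).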
AudioSource
Type: string or number or null or object shaped as below:
Property | Type | Description |
---|---|---|
assetId(optional) | number | The asset ID of a local audio asset, acquired with the require function. |
headers(optional) | Record<string, string> | An object representing the HTTP headers to send along with the request for a remote audio source. On web, this requires the server to allow cross-origin requests. |
uri(optional) | string | A string representing the resource identifier for the audio, which could be an HTTPS address, a local file path, or the name of a static audio file resource. |
AudioStatus
Comprehensive status information for an AudioPlayer
.
This object contains all the current state information about audio playback,
including playback position, duration, loading state, and playback settings.
Used by useAudioPlayerStatus()
to provide real-time status updates.
Property | Type | Description |
---|---|---|
currentTime | number | Current playback position in seconds. |
didJustFinish | boolean | Whether the audio just finished playing. |
duration | number | Total duration of the audio in seconds, or 0 if not yet determined. |
id | number | Unique identifier for the player instance. |
isBuffering | boolean | Whether the player is currently buffering data. |
isLoaded | boolean | Whether the audio has finished loading and is ready to play. |
loop | boolean | Whether the audio is set to loop when it reaches the end. |
mute | boolean | Whether the player is currently muted. |
playbackRate | number | Current playback rate (1.0 = normal speed). |
playbackState | string | String representation of the player's internal playback state. |
playing | boolean | Whether the audio is currently playing. |
reasonForWaitingToPlay | string | Reason why the player is waiting to play (if applicable). |
shouldCorrectPitch | boolean | Whether pitch correction is enabled for rate changes. |
timeControlStatus | string | String representation of the player's time control status (playing/paused/waiting). |
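For a progress bar, currentTime and duration are the relevant fields. Here is a small illustrative helper (not part of the API) that guards against the duration-not-yet-known case, where duration is 0:

```typescript
// Fraction of the track played, clamped to [0, 1].
// duration is 0 until the player determines it, so guard against division by zero.
function playbackProgress(status: { currentTime: number; duration: number }): number {
  if (status.duration <= 0) return 0;
  return Math.min(status.currentTime / status.duration, 1);
}

console.log(playbackProgress({ currentTime: 30, duration: 120 })); // 0.25
console.log(playbackProgress({ currentTime: 5, duration: 0 }));    // 0 (duration unknown)
```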
BitRateStrategy
Literal Type: string
Bit rate strategies for audio encoding.
Determines how the encoder manages bit rate during recording, affecting file size consistency and quality characteristics.
Acceptable values are: 'constant'
| 'longTermAverage'
| 'variableConstrained'
| 'variable'
InterruptionMode
Literal Type: string
Audio interruption behavior modes for iOS.
Controls how your app's audio interacts with other apps' audio when interruptions occur. This affects what happens when phone calls, notifications, or other apps play audio.
Acceptable values are: 'mixWithOthers'
| 'doNotMix'
| 'duckOthers'
InterruptionModeAndroid
Literal Type: string
Audio interruption behavior modes for Android.
Controls how your app's audio interacts with other apps' audio on Android. Note that Android doesn't support 'mixWithOthers' mode; audio focus is more strictly managed.
Acceptable values are: 'doNotMix'
| 'duckOthers'
PermissionExpiration
Literal Type: union
Permission expiration time. Currently, all permissions are granted permanently.
Acceptable values are: 'never'
| number
PermissionResponse
An object obtained by permissions get and request functions.
Property | Type | Description |
---|---|---|
canAskAgain | boolean | Indicates if user can be asked again for specific permission. If not, one should be directed to the Settings app in order to enable/disable the permission. |
expires | PermissionExpiration | Determines time when the permission expires. |
granted | boolean | A convenience boolean that indicates if the permission is granted. |
status | PermissionStatus | Determines the status of the permission. |
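The granted and canAskAgain flags together determine the sensible next step in a permission flow. An illustrative helper; the local type and function are hypothetical, shaped after the table above:

```typescript
// Minimal local shape mirroring the PermissionResponse fields used here.
type PermissionResponseLike = { granted: boolean; canAskAgain: boolean };

// Decide what a UI should do after checking permissions:
// proceed if granted, request again if allowed, otherwise send the user to Settings.
function nextPermissionStep(res: PermissionResponseLike): 'proceed' | 'request' | 'open-settings' {
  if (res.granted) return 'proceed';
  return res.canAskAgain ? 'request' : 'open-settings';
}

console.log(nextPermissionStep({ granted: true, canAskAgain: true }));   // 'proceed'
console.log(nextPermissionStep({ granted: false, canAskAgain: false })); // 'open-settings'
```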
PitchCorrectionQuality
Literal Type: string
Pitch correction quality settings for audio playback rate changes.
When changing playback rate, pitch correction can be applied to maintain the original pitch. Different quality levels offer trade-offs between processing power and audio quality.
Acceptable values are: 'low'
| 'medium'
| 'high'
RecorderState
Current state information for an AudioRecorder
.
This object contains detailed information about the recorder's current state,
including recording status, duration, and technical details. This is what you get
when calling recorder.getStatus()
or using useAudioRecorderState()
.
Property | Type | Description |
---|---|---|
canRecord | boolean | Whether the recorder is ready and able to record. |
durationMillis | number | Duration of the current recording in milliseconds. |
isRecording | boolean | Whether recording is currently in progress. |
mediaServicesDidReset | boolean | Whether the media services have been reset (typically indicates a system interruption). |
metering(optional) | number | Current audio level/volume being recorded (if metering is enabled). |
url | string | null | File URL where the recording will be saved, if available. |
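durationMillis is reported in milliseconds, so a recording timer in the UI needs a small formatting step. An illustrative helper (not part of the API):

```typescript
// Format a millisecond duration as m:ss for a recording timer,
// e.g. recorderState.durationMillis from useAudioRecorderState.
function formatDuration(durationMillis: number): string {
  const totalSeconds = Math.floor(durationMillis / 1000);
  const minutes = Math.floor(totalSeconds / 60);
  const seconds = totalSeconds % 60;
  return `${minutes}:${String(seconds).padStart(2, '0')}`;
}

console.log(formatDuration(0));     // "0:00"
console.log(formatDuration(65000)); // "1:05"
```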
RecordingEvents
Event types that an AudioRecorder
can emit.
These events are used internally by expo-audio
hooks to provide real-time status updates.
Use useAudioRecorderState()
or the statusListener
parameter in useAudioRecorder()
instead of subscribing directly.
Property | Type | Description |
---|---|---|
recordingStatusUpdate | (status: RecordingStatus) => void | Fired when the recorder's status changes (start/stop/pause/error, and so on). |
RecordingInput
Represents an available audio input device for recording.
This type describes audio input sources like built-in microphones, external microphones, or other audio input devices that can be used for recording. Each input has identifying information that can be used to select the preferred recording source.
Property | Type | Description |
---|---|---|
name | string | Human-readable name of the audio input device. |
type | string | Type or category of the input device (for example, 'Built-in Microphone', 'External Microphone'). |
uid | string | Unique identifier for the input device, used to select the preferred input for recording. |
RecordingOptions
Property | Type | Description |
---|---|---|
android | RecordingOptionsAndroid | Only for: Android. Recording options for the Android platform. |
bitRate | number | The desired bit rate. |
extension | string | The desired file extension. |
ios | RecordingOptionsIos | Only for: iOS. Recording options for the iOS platform. |
isMeteringEnabled(optional) | boolean | A boolean that determines whether audio level information will be part of the status object under the "metering" key. |
numberOfChannels | number | The desired number of channels. |
sampleRate | number | The desired sample rate. |
web(optional) | RecordingOptionsWeb | Only for: Web. Recording options for the Web platform. |
RecordingOptionsAndroid
Recording configuration options specific to Android.
Android recording uses MediaRecorder with options for format, encoder, and file constraints. These settings control the output format and quality characteristics.
Property | Type | Description |
---|---|---|
audioEncoder | AndroidAudioEncoder | The desired audio encoder. See the AndroidAudioEncoder type for all valid values. |
extension(optional) | string | The desired file extension. |
maxFileSize(optional) | number | The desired maximum file size in bytes, after which the recording stops. |
outputFormat | AndroidOutputFormat | The desired file format. See the AndroidOutputFormat type for all valid values. |
sampleRate(optional) | number | The desired sample rate. |
RecordingOptionsIos
Recording configuration options specific to iOS.
iOS recording uses AVAudioRecorder with extensive format and quality options. These settings provide fine-grained control over the recording characteristics.
Property | Type | Description |
---|---|---|
audioQuality | AudioQuality | number | The desired audio quality. See the AudioQuality enum for all valid values. |
bitDepthHint(optional) | number | The desired bit depth hint. |
bitRateStrategy(optional) | number | The desired bit rate strategy. |
extension(optional) | string | The desired file extension. |
linearPCMBitDepth(optional) | number | The desired PCM bit depth. |
linearPCMIsBigEndian(optional) | boolean | A boolean describing if the PCM data should be formatted in big endian. |
linearPCMIsFloat(optional) | boolean | A boolean describing if the PCM data should be encoded in floating point or integral values. |
outputFormat(optional) | string | IOSOutputFormat | number | The desired file format. See the IOSOutputFormat enum for all valid values. |
sampleRate(optional) | number | The desired sample rate. |
RecordingOptionsWeb
Recording options for the web.
Web recording uses the MediaRecorder
API, which has different capabilities
compared to native platforms. These options map directly to MediaRecorder
settings.
Property | Type | Description |
---|---|---|
bitsPerSecond(optional) | number | Target bits per second for the recording. |
mimeType(optional) | string | MIME type for the recording (for example, 'audio/webm', 'audio/mp4'). |
RecordingStatus
Status information for recording operations from the event system.
This type represents the status data emitted by recordingStatusUpdate
events.
It contains high-level information about the recording session and any errors.
Used internally by the event system. Most users should use useAudioRecorderState()
instead.
Property | Type | Description |
---|---|---|
error | string | null | Error message if an error occurred, |
hasError | boolean | Whether an error occurred during recording. |
id | number | Unique identifier for the recording session. |
isFinished | boolean | Whether the recording has finished (stopped). |
url | string | null | File URL of the completed recording, if available. |
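A statusListener passed to useAudioRecorder receives objects of this shape. Below is an illustrative handler; the local type mirrors the table above and is declared here only for self-containment, not imported from expo-audio:

```typescript
// Minimal local shape mirroring the RecordingStatus fields used here.
type RecordingStatusLike = {
  hasError: boolean;
  error: string | null;
  isFinished: boolean;
  url: string | null;
};

// Turn a status update into a short human-readable message,
// checking the error case first, then completion, then the in-progress default.
function describeRecording(status: RecordingStatusLike): string {
  if (status.hasError) return `error: ${status.error ?? 'unknown'}`;
  if (status.isFinished) return `saved to ${status.url ?? '(no file)'}`;
  return 'recording...';
}

console.log(
  describeRecording({ hasError: false, error: null, isFinished: true, url: 'file://a.m4a' })
); // "saved to file://a.m4a"
```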
Enums
AudioQuality
Audio quality levels for recording.
Predefined quality levels that balance file size and audio fidelity. Higher quality levels produce better sound but larger files and require more processing power.
IOSOutputFormat
Audio output format options for iOS recording.
Comprehensive enum of audio formats supported by iOS for recording. Each format has different characteristics in terms of quality, file size, and compatibility. Some formats like LINEARPCM offer the highest quality but larger file sizes, while compressed formats like AAC provide good quality with smaller files.