WebRtcDevice

interface WebRtcDevice

Basic WebRTC device abstraction.

Inheritors

AndroidWebRtcDevice

Functions

open suspend fun captureWhiteboardStream(shapesProvider: () -> List<Any>, useImageBackgroundProvider: () -> Boolean = { false }, width: Int = 1280, height: Int = 720, frameRate: Int = 30): MediaStream?

Captures the whiteboard canvas as a video stream for recording, similar to the HTML canvas captureStream(30) API in web browsers.
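A caller typically wires the two provider lambdas to live whiteboard state, which the capturer polls on each frame. A minimal sketch, assuming a hypothetical holder class (the provider signatures match the declaration above; the shape list contents are illustrative):

```kotlin
// Hypothetical caller-side setup for captureWhiteboardStream. The provider
// lambdas are polled per frame; currentShapes and useImageBackground stand
// in for real whiteboard state.
class WhiteboardCaptureConfig {
    var currentShapes: List<Any> = emptyList()
    var useImageBackground: Boolean = false

    // Providers matching the captureWhiteboardStream parameters.
    val shapesProvider: () -> List<Any> = { currentShapes }
    val useImageBackgroundProvider: () -> Boolean = { useImageBackground }
}
```

With such a holder, a call would look like `device.captureWhiteboardStream(config.shapesProvider, config.useImageBackgroundProvider)`, relying on the 1280x720 / 30 fps defaults.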

abstract fun close()

Releases device resources.


Creates a receive transport from parameters.


Creates a send transport from parameters.

open fun createVirtualVideoSource(width: Int, height: Int, frameRate: Int): VirtualVideoSource?

Creates a virtual video stream that can receive processed frames. Used for virtual backgrounds where frames are processed by ML Kit and need to be fed back into a WebRTC stream for sending to remote participants.
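The processed-frame loop described above can be sketched as follows. The FrameSink interface and the processing step are hypothetical stand-ins for the real VirtualVideoSource and the ML Kit segmentation pass, used only to show the feed-back shape:

```kotlin
// Hypothetical stand-in for VirtualVideoSource: something that accepts
// processed pixel frames.
fun interface FrameSink {
    fun pushFrame(pixels: IntArray, width: Int, height: Int)
}

// Hypothetical processing step: treat zero pixels as "background", replace
// them with a solid color, then feed the result into the sink, mirroring
// how ML Kit output would be pushed back into the WebRTC stream.
fun processAndFeed(raw: IntArray, width: Int, height: Int, sink: FrameSink) {
    val processed = IntArray(raw.size) { i -> if (raw[i] == 0) 0x00FF00 else raw[i] }
    sink.pushFrame(processed, width, height)
}
```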


Returns the device's currently loaded RTP capabilities, if available. Implementations can override this to provide a platform-specific snapshot; the default returns null on platforms that do not expose it yet.

abstract suspend fun enumerateDevices(): List<MediaDeviceInfo>

Enumerates the available media devices.
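Callers often need to split the enumerated list by device kind. A small sketch, assuming a hypothetical DeviceInfo shape with the web-style kind strings (audioinput / videoinput / audiooutput); the real MediaDeviceInfo fields may differ:

```kotlin
// Hypothetical shape for a media device entry; the real MediaDeviceInfo
// class in this library may expose different fields.
data class DeviceInfo(val deviceId: String, val kind: String, val label: String)

// Group enumerated devices by kind so a picker UI can list cameras,
// microphones, and speakers separately.
fun groupByKind(devices: List<DeviceInfo>): Map<String, List<DeviceInfo>> =
    devices.groupBy { it.kind }
```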

abstract suspend fun getDisplayMedia(constraints: Map<String, Any?>): MediaStream

Captures the screen for sharing.

abstract suspend fun getUserMedia(constraints: Map<String, Any?>): MediaStream

Retrieves a media stream for the given constraints (camera/microphone).
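A constraints map might look like the following. The keys mirror the web MediaStreamConstraints shape (audio/video with nested video bounds); the exact keys this implementation accepts are an assumption, not confirmed by this page:

```kotlin
// Assumed web-style constraints map for getUserMedia: enable audio, and
// request 720p video at 30 fps. Key names are illustrative.
val userMediaConstraints: Map<String, Any?> = mapOf(
    "audio" to true,
    "video" to mapOf(
        "width" to 1280,
        "height" to 720,
        "frameRate" to 30
    )
)
```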

abstract suspend fun load(rtpCapabilities: RtpCapabilities): Result<Unit>

Loads RTP capabilities for the device.
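Since load returns a Result<Unit>, callers can fold success and failure into one code path. A sketch using the standard-library Result API (the status strings are illustrative only):

```kotlin
// Fold load()'s Result<Unit> into a status string: one branch for success,
// one carrying the failure's message.
fun describeLoad(result: Result<Unit>): String =
    result.fold(
        onSuccess = { "device loaded" },
        onFailure = { e -> "load failed: ${e.message}" }
    )
```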

open suspend fun setAudioOutputDevice(deviceId: String): Boolean

Sets the audio output device for playback (speaker, Bluetooth, headphones).
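Because the function reports success as a Boolean, a caller can walk a preference list (e.g. Bluetooth before speaker) and keep the first device that sticks. A hypothetical helper, where the trySet parameter stands in for a call to setAudioOutputDevice:

```kotlin
// Hypothetical fallback strategy: try preferred output device ids in order
// and return the first one the platform accepts, or null if none work.
fun selectAudioOutput(preferred: List<String>, trySet: (String) -> Boolean): String? =
    preferred.firstOrNull { id -> trySet(id) }
```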