Media Lifecycle
Use this page once a room can connect and you need to understand what happens next: capture, produce, consume, render, and extend.
The lifecycle in one view
- Connect to MediaSFU.
- Create or join a room.
- Capture local sources such as camera, microphone, or screen.
- Produce local media into the room.
- Consume participant media from the room.
- Render it in your UI.
- Layer on recording, translation, boards, permissions, or AI flows.
If you keep the prebuilt UI, MediaSFU manages most of this for you. If you go headless or shared-core, these stages become explicit and you control more of the sequencing.
The two main operating modes
Prebuilt or override-first
Use this when you want MediaSFU to handle most runtime coordination.
Typical actions:
- join the room
- let the SDK manage capture and render flows
- customize cards, controls, menus, and modals
- rely on built-in room state, requests, recording, translation, and board logic
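In prebuilt mode the wiring can stay very small, since the SDK renders the pre-join wizard and the room itself. A minimal sketch follows; the `credentials` pair is a placeholder for your own MediaSFU API credentials, and the surrounding app shell is assumed.

```tsx
import { MediasfuGeneric } from 'mediasfu-reactjs';

// Placeholder credentials -- substitute your own MediaSFU API pair.
const credentials = { apiUserName: 'yourApiUserName', apiKey: 'yourApiKey' };

export function App() {
  // Prebuilt mode: the SDK renders the pre-join wizard, room layout,
  // capture controls, and modals. Override individual surfaces as needed.
  return <MediasfuGeneric credentials={credentials} />;
}
```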
Headless or custom-shell
Use this when you need your own layout and action model.
Typical actions:
- disable the default UI with `returnUI={false}`
- pass `noUIPreJoinOptions` if you want to bypass the built-in wizard
- collect live helpers through `updateSourceParameters`
- call runtime actions from your own controls
Headless helper example
```tsx
import { useState } from 'react';
import { MediasfuGeneric } from 'mediasfu-reactjs';

export function CustomShell() {
  const [sourceParameters, setSourceParameters] = useState<Record<string, unknown>>({});

  return (
    <>
      <MediasfuGeneric
        returnUI={false}
        noUIPreJoinOptions={{
          action: 'create',
          eventType: 'conference',
          userName: 'Host',
          capacity: 6,
          duration: 20,
        }}
        sourceParameters={sourceParameters}
        updateSourceParameters={setSourceParameters}
      />
      <button
        onClick={() =>
          (sourceParameters as { clickVideo?: (params: Record<string, unknown>) => void })
            .clickVideo?.({ parameters: sourceParameters })
        }
      >
        Toggle camera
      </button>
      <button
        onClick={() =>
          (sourceParameters as { clickAudio?: (params: Record<string, unknown>) => void })
            .clickAudio?.({ parameters: sourceParameters })
        }
      >
        Toggle mic
      </button>
    </>
  );
}
```
This is the entry point for custom produce and control flows without rewriting the room runtime.
Helper calls by framework
The helper shape is the same idea across SDKs: pass the current runtime parameters back into the helper under the `parameters` key.
React
```ts
sourceParameters.clickVideo?.({ parameters: sourceParameters });
sourceParameters.clickAudio?.({ parameters: sourceParameters });
sourceParameters.clickScreenShare?.({ parameters: sourceParameters });
```
Angular
```ts
this.sourceParameters?.clickVideo?.({ parameters: this.sourceParameters });
this.sourceParameters?.clickAudio?.({ parameters: this.sourceParameters });
this.sourceParameters?.clickScreenShare?.({ parameters: this.sourceParameters });
```
Vue
```ts
sourceParameters.value?.clickVideo?.({ parameters: sourceParameters.value });
sourceParameters.value?.clickAudio?.({ parameters: sourceParameters.value });
sourceParameters.value?.clickScreenShare?.({ parameters: sourceParameters.value });
```
Raw helper calls versus shipped UI surfaces
When a tutorial shows sourceParameters.clickVideo?.({ parameters: sourceParameters }) or parameters.launchRecording?.({ parameters }), treat that as the lowest-level wiring, not as the only supported UX.
- In headless mode, the helper bundle lives in `sourceParameters`.
- In `customComponent`, the same bundle arrives as the `parameters` prop.
- Direct helper calls are ideal for quick toggles and early proof-of-concept controls.
- If MediaSFU already ships the surface you need, prefer importing that component or overriding it rather than rebuilding the entire interaction from scratch.
Recording is the clearest example. `launchRecording` only triggers the recording flow. If you want the stock recording experience with your own branding, use `RecordingModal` or `ModernRecordingModal`, or restyle the modal through `uiOverrides.recordingModal`.
React example: reuse the shipped recording modal
```tsx
import { useState } from 'react';
import { ModernRecordingModal } from 'mediasfu-reactjs';

function RecordingControls({ parameters }: { parameters: any }) {
  const [showRecording, setShowRecording] = useState(false);

  return (
    <>
      <button onClick={() => setShowRecording(true)}>Recording</button>
      <ModernRecordingModal
        isVisible={showRecording}
        onClose={() => setShowRecording(false)}
        parameters={parameters}
        position="center"
      />
    </>
  );
}
```
If you want to keep the stock recording behavior but restyle the surface, use UI overrides and replace recordingModal instead of rebuilding the recording flow with only raw buttons.
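If the goal is only restyling, the override path can look roughly like the sketch below. The shape of each `uiOverrides` entry (assumed here to accept a replacement `component`) may vary between SDK versions, so treat the prop names as illustrative and confirm against the uiOverrides reference for your release.

```tsx
import { MediasfuGeneric, ModernRecordingModal } from 'mediasfu-reactjs';

// Illustrative: wrap the shipped modal to inject branding, then register the
// wrapper as the recordingModal override. The override entry shape is an
// assumption -- verify it against your SDK version's uiOverrides reference.
function BrandedRecordingModal(props: any) {
  return (
    <div className="acme-brand">
      <ModernRecordingModal {...props} />
    </div>
  );
}

export function App() {
  return (
    <MediasfuGeneric
      uiOverrides={{
        recordingModal: { component: BrandedRecordingModal },
      }}
    />
  );
}
```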
What "produce" usually means in MediaSFU
At the UI SDK level, production usually starts through user actions such as:
- camera toggle
- microphone toggle
- screen share toggle
- whiteboard or screenboard capture
In headless mode, those actions are still available through runtime helpers in sourceParameters.
In shared-core mode, you move closer to the transport and consumer functions directly.
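In headless mode, a common failure is invoking a helper before the runtime has populated it. A small guard keeps custom controls safe; the helper names follow the UI SDK convention shown above, but `invokeHelper` itself is an illustrative utility, not part of the SDK.

```typescript
type HelperBag = Record<string, unknown>;

// Safely invoke a runtime helper (e.g. 'clickVideo') only once
// updateSourceParameters has populated the bag. Returns false when the
// helper is absent -- typically before the room has connected.
export function invokeHelper(bag: HelperBag, name: string): boolean {
  const fn = bag[name];
  if (typeof fn !== 'function') return false;
  (fn as (opts: { parameters: HelperBag }) => void)({ parameters: bag });
  return true;
}
```

Wire your buttons through this guard so a click before connection becomes a no-op instead of a runtime error.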
What "consume" usually means in MediaSFU
Consumption is the other half of the room model:
- participant streams arrive
- MediaSFU tracks consumers and transport state
- your UI decides how to render them
- layout helpers or your own custom shell decide what is visible
If you stay prebuilt, the SDK handles the render composition. If you go headless or shared-core, you control more of the render layer and should study mediasfu-shared sooner.
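In headless mode, deciding which consumers are visible is policy your shell owns. As a minimal, library-free sketch (the participant record here is illustrative, not the SDK's real state shape), one such policy caps the grid while prioritizing screen shares and recent speakers:

```typescript
// Illustrative participant record; the SDK's runtime state is richer.
interface RenderCandidate {
  id: string;
  isScreenShare: boolean;
  lastSpokeAt: number; // epoch ms; 0 if the participant never spoke
}

// Pick up to maxTiles candidates: screen shares first, then most recent speakers.
export function pickVisible(candidates: RenderCandidate[], maxTiles: number): string[] {
  return [...candidates]
    .sort((a, b) => {
      if (a.isScreenShare !== b.isScreenShare) return a.isScreenShare ? -1 : 1;
      return b.lastSpokeAt - a.lastSpokeAt;
    })
    .slice(0, maxTiles)
    .map((c) => c.id);
}
```

The point is not this particular ranking but that the ranking lives in your code: the SDK tracks consumers, and your shell maps them to tiles.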
Best entry point by need
| Need | Start here |
|---|---|
| Toggle camera, audio, or screen inside a custom UI shell | UI SDK + returnUI={false} + updateSourceParameters |
| Replace room presentation but keep MediaSFU workflow | Custom component replacement |
| Build your own runtime wrapper around MediaSFU primitives | mediasfu-shared |
| Extend built-in room behavior without replacing the shell | UI overrides |
Advanced layers after the baseline
Once the media lifecycle works, the next high-value layers are:
- recording start, pause, resume, and stop
- translation and subtitle flows
- screenboard and whiteboard sync
- requests, permissions, and panelist workflows
- AI routing, summaries, and multimodal assistants
These are easier to reason about after the base capture -> produce -> consume -> render path is stable.
Common mistakes
- Trying to debug translation or recording before basic media publish/consume is stable.
- Going straight to shared-core without first learning the helper surface in a UI SDK.
- Treating `sourceParameters` as static. It is runtime state and should be refreshed through `updateSourceParameters`.
- Replacing the UI shell before the secure join flow is working.