I just want to try it. Show me the fastest path to a working demo.
You built the voice AI. Your chatbot answers questions. But your users stare at a text bubble while the AI "talks." They want a face — a character that speaks, reacts, and feels alive.
The problem: every avatar SDK you find is either a product page with no code, a 3D photo-to-avatar tool you do not need, or locked into Unity and VR headsets. No one shows you how to get a talking 2D avatar running in your web app.
This tutorial changes that. You will get a talking 2D avatar running in under 10 minutes — and you do not need to write a single line of code. The no-code path uses the MascotBot dashboard and an ElevenLabs agent: configure, deploy, done. If you prefer full control, the React developer path gives you copy-paste SDK code. No 3D modeling. No credit card. Choose your path and go.
Updated for @mascotbot-sdk/react v0.1.8 — February 2026.
What You Will Build
By the end of this tutorial, you will have a talking avatar running in your app:
- A 2D animated character rendered at 120fps via Rive
- Real-time lip sync that matches audio with under 500ms latency
- Expression changes (happy, thinking, surprised) triggered from your code
- A working interactive avatar demo you can fork and customize

Time to complete: approximately 10 minutes. In our testing with 50+ developers, most finish in under 8 minutes.
Choose Your Path
| | No-Code Path | React Developer Path |
|---|---|---|
| Time | ~5 minutes | ~10 minutes |
| What you need | MascotBot subscription + ElevenLabs account | Node.js 18+ + React knowledge |
| Result | Hosted talking avatar via dashboard | Custom React component in your app |
| Best for | Non-technical founders, quick demos | Developers who want full control |
Most users start with the no-code path to validate the experience, then move to the React SDK when they need deeper integration.
No-Code Path: Dashboard Setup (5 Minutes)
You do not need to write code to get a talking avatar. The MascotBot dashboard handles everything — you just configure and deploy.
Step 1 — Set Up Your ElevenLabs Agent
First, create a voice agent in ElevenLabs that your avatar will speak through:
- Go to elevenlabs.io and sign in
- Open the Agents Platform and click Create Agent
- Name your agent and write a system prompt describing its personality
- Under Settings, select your preferred voice and LLM model
- Copy your Agent ID — you will need it in the next step

API key permissions: In your ElevenLabs account settings, make sure your API key has Conversational AI permissions enabled. This is the most common setup issue.

Step 2 — Connect in MascotBot Dashboard
- Go to app.mascot.bot and sign in (or create a free account)
- Navigate to Avatars and choose a pre-made character (Cat, Panda, Girl, or Robot)
- Under Voice Provider, select ElevenLabs
- Paste your ElevenLabs API Key and Agent ID
- Click Save and then Test — your avatar should start speaking


Step 3 — Deploy
Your avatar is now live. You can:
- Embed it on your website using the provided embed code (copy from the dashboard)
- Share the demo link directly with your team
- Try the live playground at mascot.bot/11labs-demo — choose an avatar, paste your API key and Agent ID, and see it work instantly
No server setup. No deployment pipeline. The avatar runs from MascotBot's infrastructure.
Total time: Under 5 minutes from signup to talking avatar.
React Developer Path: SDK Integration (10 Minutes)
If you need the avatar as a React component inside your own app, follow this path. You get full control over rendering, positioning, and interaction logic.
Prerequisites
- Node.js 18+ — download here if you need it
- npm or yarn — comes with Node.js
- A free MascotBot API key — get one from app.mascot.bot (no credit card required)
- Basic React knowledge
Step 4 — Install the MascotBot SDK
The MascotBot SDK is a React package that handles avatar rendering and lip sync. Install it alongside the Rive animation runtime:
```bash
# Download the SDK .tgz from your MascotBot dashboard, then:
npm install ./mascotbot-sdk-react-0.1.8.tgz
npm install @rive-app/react-webgl2
```

The SDK is distributed as a .tgz file (not yet on the public npm registry). Download it from your dashboard after signing up.

What these packages do:
- @mascotbot-sdk/react — the avatar SDK: rendering, lip sync engine, speech hooks
- @rive-app/react-webgl2 — the Rive animation runtime that renders 2D characters at 120fps in the browser
This is the only 2D avatar SDK built for web-first development. No Unity. No VR headset. No 3D modeling tools.
Store your API key in an environment variable. Never hard-code it in source files:
```bash
# .env.local
REACT_APP_MASCOTBOT_API_KEY=your_api_key_here
```
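As a sanity check, you can fail fast when the key is missing instead of silently sending unauthenticated requests. This is a small sketch, assuming a Create React App-style REACT_APP_ prefix (the same variable name the speech hook later in this tutorial reads); adjust the prefix for your bundler (VITE_, NEXT_PUBLIC_, and so on):

```javascript
// Fail fast if the MascotBot API key is missing, rather than letting
// requests go out with an empty key. The variable name matches the
// useMascotSpeech example in this tutorial; the prefix depends on
// your bundler (this sketch assumes Create React App conventions).
function getMascotApiKey(env = process.env) {
  const key = env.REACT_APP_MASCOTBOT_API_KEY;
  if (!key) {
    throw new Error(
      "REACT_APP_MASCOTBOT_API_KEY is not set. Add it to .env.local"
    );
  }
  return key;
}
```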
Step 5 — Render Your First Avatar
Wrap your app with the SDK provider, connect a Rive animation, and see your character on screen.
Add MascotProvider to your app root:
```jsx
import { MascotProvider } from "@mascotbot-sdk/react";

function App() {
  return (
    <MascotProvider>
      <TalkingAvatar />
    </MascotProvider>
  );
}
```

MascotProvider wraps your app once at the top level. It manages the SDK's internal state — no props needed.
Create the avatar component:
```jsx
import { MascotClient } from "@mascotbot-sdk/react";
import { Alignment, Fit, Layout, useRive } from "@rive-app/react-webgl2";

function TalkingAvatar() {
  const rive = useRive(
    {
      src: "/character.riv",
      artboard: "Character",
      stateMachines: "InLesson",
      autoplay: true,
      layout: new Layout({
        fit: Fit.FitHeight,
        alignment: Alignment.Center,
      }),
    },
    { shouldResizeCanvasToContainer: true }
  );
  const { RiveComponent } = rive;

  return (
    <MascotClient rive={rive}>
      <div style={{ width: "400px", height: "400px" }}>
        <RiveComponent role="img" aria-label="Talking avatar" />
      </div>
    </MascotClient>
  );
}
```

Place your .riv character file in the public/ folder. You can download sample characters from the MascotBot dashboard.
After this step: You should see your character on screen in an idle animation. If you see a blank space, check that your container div has explicit width and height — this is the most common issue.
In our testing with 50+ developers, this step takes under 2 minutes.
Step 6 — Make It Talk: Connecting Audio
Now for the key part — turning your static character into a talking avatar with real-time lip sync.
The useMascotSpeech hook sends text to the MascotBot API, which returns audio and viseme data (mouth shape timings). The SDK plays the audio and drives lip sync automatically — you just call one function:
```jsx
import { useMascotSpeech } from "@mascotbot-sdk/react";

function SpeechControls() {
  const speech = useMascotSpeech({
    apiKey: process.env.REACT_APP_MASCOTBOT_API_KEY || "",
    apiEndpoint: "https://api.mascot.bot/v1/visemes-audio",
    bufferSize: 1,
  });

  return (
    <button
      onClick={() =>
        speech.addToQueue("Hello! I am your talking avatar.", {
          voice: "am_fenrir",
        })
      }
    >
      Make It Talk
    </button>
  );
}
```

After this step: Click the button and your character speaks with synchronized mouth movements. The lip sync API processes text into audio and viseme data, with the first response arriving in under 500ms.
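Under the hood, lip sync comes down to showing the right mouth shape at the right playback time. The SDK does this for you automatically, but here is an illustrative sketch of the idea. The { time, shape } record format below is an assumption for illustration, not the documented API response format:

```javascript
// Illustrative only: the SDK handles this internally. Given a list of
// viseme events sorted by time ({ time: seconds, shape: string } is an
// assumed shape, not the real API format), find the mouth shape that
// should be showing at playback time t using binary search.
function visemeAt(visemes, t) {
  let lo = 0;
  let hi = visemes.length - 1;
  let answer = null;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1;
    if (visemes[mid].time <= t) {
      answer = visemes[mid].shape; // latest event at or before t
      lo = mid + 1;
    } else {
      hi = mid - 1;
    }
  }
  return answer; // null before the first event
}
```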
Configuration notes:
- bufferSize: 1 starts playback as soon as the first audio chunk arrives — this gives the fastest response
- voice: "am_fenrir" selects the voice model. Multiple voices are available through the dashboard
- speech.stopAndClear() stops speech immediately. speech.clearQueue() clears pending items
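If it helps to picture the queue semantics, here is a toy model of the behavior described above. This mocks the observable behavior only; it is not the SDK's implementation:

```javascript
// A mental model of the speech queue: addToQueue plays immediately if
// nothing is playing, otherwise queues; clearQueue drops pending items
// but lets the current utterance finish; stopAndClear stops everything.
// This is an illustration of the documented behavior, not SDK source.
class SpeechQueueModel {
  constructor() {
    this.current = null; // utterance currently playing
    this.pending = []; // utterances waiting their turn
  }
  addToQueue(text) {
    if (this.current === null) this.current = text;
    else this.pending.push(text);
  }
  clearQueue() {
    this.pending = []; // current utterance keeps playing
  }
  stopAndClear() {
    this.current = null; // stop playback immediately
    this.pending = [];
  }
}
```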
The audio must be triggered by a user interaction (click or tap). Browsers block automatic audio playback — this is a web platform security policy, not a bug.
Want to use ElevenLabs voices instead of the built-in TTS? See our ElevenLabs Avatar integration guide for the complete setup.
Step 7 — Add Expressions and Personality
Your avatar can do more than talk. Rive characters support trigger inputs for expressions like thumbs up, waves, and reactions.
```jsx
import { MascotClient, MascotRive } from "@mascotbot-sdk/react";

function AvatarWithExpressions() {
  // ... useRive setup from Step 5 ...

  return (
    <MascotClient rive={rive} inputs={["thumbs_up", "gesture"]}>
      <MascotRive
        onClick={({ inputs }) => {
          inputs?.thumbs_up?.fire();
        }}
      />
    </MascotClient>
  );
}
```

The inputs prop on MascotClient declares which Rive trigger inputs your code can control. MascotRive renders the canvas and provides an onClick handler with access to those inputs.
The animated mascot's is_speaking input is handled automatically by the SDK — you never need to toggle lip sync manually. Just call addToQueue() and the character talks.
Available trigger names depend on your .riv file's state machine. Common triggers include thumbs_up, gesture, and wave. For custom expressions, see our guide on creating your own brand mascot.
Interactive Playground
Try it yourself without installing anything. This avatar SDK demo runs entirely in the browser:
Fork the playground to experiment:
- Change the character by swapping the .riv file
- Try different voices by changing the voice parameter
- Add expression triggers to the inputs array
- Adjust bufferSize to see the latency trade-off
This interactive avatar playground is the fastest way to evaluate the SDK before adding it to your project.
Common Issues and Solutions
Based on our developer support logs, these are the top 3 issues in the first 10 minutes.
Avatar Not Rendering
Symptom: Blank space where the avatar should be — no character visible.
Cause: The container div has no explicit dimensions. The Rive canvas sizes itself to its container, so a container with 0 width or 0 height renders an invisible canvas.
Fix: Add explicit width and height to the container:
```jsx
// This renders nothing — container has no size
<RiveComponent />

// This works — container has explicit dimensions
<div style={{ width: "400px", height: "400px" }}>
  <RiveComponent />
</div>
```

Lip Sync Delay or Out of Sync
Symptom: Mouth movements lag behind the audio by a noticeable amount.
Cause: bufferSize is set too high, or network latency is adding delay.
Fix: Set bufferSize: 1 for the fastest start. If you are on a slow connection, the SDK queues chunks automatically — no additional configuration needed.
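To see why a small buffer starts fastest, here is a rough back-of-envelope model of time-to-first-audio. The formula and numbers are assumptions for intuition, not measured SDK behavior:

```javascript
// Rough model (assumed, not measured): playback cannot start until
// `bufferSize` chunks have arrived, so the start delay grows roughly
// linearly with bufferSize on top of one network round trip.
function timeToFirstAudioMs(bufferSize, networkLatencyMs, chunkIntervalMs) {
  return networkLatencyMs + bufferSize * chunkIntervalMs;
}
```

With an assumed 200ms round trip and 100ms chunks, bufferSize: 1 starts around 300ms after the request, while bufferSize: 4 would wait roughly 600ms. That is the latency trade-off the playground lets you feel directly.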
Audio Not Playing in Browser
Symptom: Avatar moves its mouth but no sound comes out, or you see a console error about autoplay.
Cause: Browser autoplay policy blocks audio without a user gesture.
Fix: Always trigger speech from a user interaction — a click or tap event. Do not call addToQueue() on page load or inside useEffect:
```jsx
// Will not work — no user gesture
useEffect(() => { speech.addToQueue("Hello!"); }, []);

// Works — triggered by click
<button onClick={() => speech.addToQueue("Hello!")}>Speak</button>
```

What to Build Next
You have a talking avatar running. Here is what to explore next:
- Add ElevenLabs voice to your avatar — Connect premium AI voices for production-quality speech with natural intonation
- Create your own brand mascot — Replace the default character with your brand's custom animated mascot
- Understand real-time avatar performance — Optimize for under 500ms latency in production deployments
- Explore the full 2D Avatar SDK — Complete SDK reference with advanced features and configuration
Frequently Asked Questions
What is an avatar SDK?
An avatar SDK is a developer toolkit that lets you embed animated, talking characters into web and mobile apps. MascotBot's avatar SDK specializes in 2D animated mascots with real-time lip sync and voice AI integration — install via npm and render a React component in under 10 minutes. No 3D modeling or animation skills required.
How much does an avatar SDK cost?
MascotBot's avatar SDK has a free tier with 1,000 minutes per month — enough for development and testing. Paid plans start at approximately $0.04 per minute, which is 3-5x cheaper than video-based alternatives like HeyGen or D-ID. No credit card is required for the free tier.
Can I use my own character with the avatar SDK?
Yes. MascotBot supports custom Rive characters. Design your own brand mascot in Rive, export it as a .riv file, and import it into the SDK. Your character needs is_speaking (Boolean) and gesture (Trigger) inputs in the state machine. See our Custom Brand Mascot guide for the full workflow.
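If you build a custom character, a pre-flight check for the required state machine inputs can save debugging time. A minimal sketch, assuming you can read the state machine's inputs out as { name, type } descriptors; that descriptor shape is illustrative, not a documented SDK type:

```javascript
// Check that a custom character declares the inputs MascotBot requires:
// is_speaking (Boolean) and gesture (Trigger), per the FAQ above.
// The { name, type } descriptors stand in for whatever you read out of
// your Rive file's state machine in the Rive editor.
const REQUIRED_INPUTS = [
  { name: "is_speaking", type: "Boolean" },
  { name: "gesture", type: "Trigger" },
];

function missingRequiredInputs(declaredInputs) {
  return REQUIRED_INPUTS.filter(
    (req) =>
      !declaredInputs.some(
        (input) => input.name === req.name && input.type === req.type
      )
  ).map((req) => `${req.name} (${req.type})`);
}
```

An empty result means the character is ready; anything listed must be added to the state machine before the SDK can drive it.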
What is the difference between an avatar SDK and an avatar API?
An avatar SDK is a client-side toolkit (React component, Flutter widget) that renders the avatar in your app. An avatar API is the server-side service that handles voice synthesis, lip sync processing, and character state. MascotBot provides both — the SDK for rendering and the API for intelligence. You install the SDK in your frontend and it communicates with the API automatically.
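Conceptually, the SDK assembles requests like the one below against the lip sync endpoint shown earlier in this tutorial (https://api.mascot.bot/v1/visemes-audio). The auth header and body fields here are assumptions for illustration; in practice the SDK builds and sends the real request for you:

```javascript
// Illustration of the SDK/API split: the client-side SDK turns a call
// like addToQueue(text, { voice }) into a server request. The endpoint
// URL comes from the tutorial; the Bearer auth scheme and body field
// names are assumed for this sketch, not documented API details.
function buildSpeechRequest(apiKey, text, voice) {
  return {
    url: "https://api.mascot.bot/v1/visemes-audio",
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`, // assumed auth scheme
    },
    body: JSON.stringify({ text, voice }),
  };
}
```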
How do I integrate an avatar SDK with React?
Install @mascotbot-sdk/react via npm, wrap your app with MascotProvider, create a useRive instance with your .riv character file, pass it to MascotClient, and render. The full code is under 20 lines — see Steps 4 and 5 above for the complete implementation.
