How to Auto-Create HeyGen Videos with n8n (So You Can Stop Copy-Pasting Your Life Away)

If you’ve ever made a HeyGen video manually—paste script, pick avatar, pick voice, hit render, wait, download, upload—you already know the truth: it’s not “hard,” it’s just annoying. And repetitive work is where good ideas go to die.

So let’s fix it. In this post, I’ll show you how to automatically create HeyGen videos using n8n: generate a script (usually with GPT), call the HeyGen API to create the video, poll until it’s done, then grab the final URL and ship it wherever you want (Sheets, Slack, YouTube, etc.). This is one of those automations that feels like hiring a tiny robot intern—minus the awkward onboarding.

What we’re building (the simple mental model)

Think of this workflow like a drive-thru:

  • Trigger: “Here’s today’s topic” (schedule, webhook, RSS, Google Sheet row, whatever).
  • Script generator: GPT turns that topic into a tight, spoken script.
  • HeyGen render: We send avatar + voice + script to HeyGen.
  • Polling loop: We check status every 10–30 seconds until it’s ready.
  • Distribution: Save the video URL, notify your team, post it, etc.

That’s it. No mystical AI wizardry. Just plumbing.
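The drive-thru above can be sketched as a tiny Python pipeline. Every function here is a placeholder standing in for one n8n node, not a real API, so you can see the data flow before you wire anything up:

```python
# Minimal sketch of the pipeline; each function stands in for one n8n node.
# All names here are placeholders, not real HeyGen/n8n APIs.

def run_pipeline(topic, generate_script, create_video, poll_status, distribute):
    """Wire the five stages together: trigger -> script -> render -> poll -> ship."""
    script = generate_script(topic)      # GPT node turns topic into narration
    video_id = create_video(script)      # HeyGen create call returns a video_id
    video_url = poll_status(video_id)    # wait loop until status == "completed"
    distribute(video_url)                # Sheets / Slack / social, your call
    return video_url
```

The point of the shape: each stage only needs the previous stage's output, which is exactly how data passes between n8n nodes.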

Prereqs (don’t skip these, future-you will thank you)

1) A HeyGen account + avatar + voice

You’ll need a HeyGen account, and you’ll want to set up:

  • An AI avatar (often created from a 30–60 second video of yourself).
  • A voice (HeyGen supports voice options and voice cloning; you can also import from ElevenLabs).
  • An API key from your HeyGen dashboard.

Heads up: HeyGen’s free tier has limits (watermarks, and credits that can expire), so treat it like a test environment—not your forever production plan. [1] [4]

2) An n8n instance

You can run n8n self-hosted or use n8n Cloud. Either way, you’ll be wiring nodes together like LEGO for grown-ups. Also: n8n has pre-built workflow templates you can import, which is basically the cheat code. [2] [9]

3) Optional: a content source

If you want this to run automatically, you’ll probably want a source of “what to talk about,” like:

  • RSS feeds
  • Google Sheets / Airtable
  • Tavily for trending news
  • Scraping tools like Apify/Firecrawl

Totally optional. You can start with “manual trigger + topic” and still get 80% of the benefit. [2] [5] [8]

The workflow, step-by-step (with the practical bits)

Step 1: Pick your trigger

This is where the automation starts. Common triggers:

  • Schedule Trigger: daily/weekly videos (my favorite for content ops).
  • Webhook: hit a URL to generate a video on demand (great for internal tools).
  • RSS / Google Sheets: new row/new item = new video.

Ask yourself: do you want “videos on a calendar” or “videos when something happens”? Either works. [2] [7]

Step 2: Generate the script (don’t overthink it)

This is where you use an OpenAI/GPT node (or any LLM) to create a script that’s actually spoken-word friendly. Blog prose is not the same as video narration. If your script reads like a legal document, your avatar will sound like it’s testifying in court.

A simple prompt structure I like:

  • Input: topic + 3 bullet points + desired length (like 60–90 seconds).
  • Output: short hook, 3 beats, quick close + call-to-action.

In n8n, you’ll usually:

  1. Fetch the topic/data (RSS, scraping, Sheets, etc.).
  2. Send it into your GPT node with instructions like: “Turn this into an engaging 1-minute script for an AI avatar.”
  3. Store the result as something like {{$json.script}}.

This “script” field becomes the payload you send to HeyGen. [3] [6] [8]
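Here's one way to assemble that prompt before it goes into the GPT node. The hook/beats/close structure mirrors the outline above; the exact wording is just a starting point, not the "correct" prompt:

```python
def build_script_prompt(topic, bullets, target_seconds=75):
    """Assemble a spoken-word prompt for the GPT node.
    Structure (hook, beats, close + CTA) follows the outline above;
    tweak the wording to taste."""
    beats = "\n".join(f"- {b}" for b in bullets)
    return (
        f"Turn this into an engaging ~{target_seconds}-second script "
        f"for an AI avatar. Spoken-word style, no headings, no markdown.\n"
        f"Topic: {topic}\n"
        f"Key points:\n{beats}\n"
        f"Structure: short hook, cover each point, quick close with a call-to-action."
    )
```

In n8n you'd drop this text into the GPT node's prompt field, with the topic and bullets pulled from your trigger data via expressions.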

Step 3: Create the HeyGen video (the core API call)

Now the fun part: you tell HeyGen, “Here’s the avatar, here’s the voice, here’s the script—go make the video.”

You’ve got two options:

  • Use the native HeyGen node in n8n (simpler).
  • Use an HTTP Request node (more flexible, good if you want full control).

Typical fields you’ll send include:

  • avatar_id: your HeyGen avatar ID
  • voice_id: your voice/clone ID
  • script: the text from your GPT node (e.g., {{$json.script}})
  • background: optional image/video URL
  • template_id: optional template for styling/layout

HeyGen will respond with a video_id. That’s your tracking number—like a pizza order. You don’t have the pizza yet; you have proof the pizza exists. [1] [3] [4]
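If you go the HTTP Request route, the body looks roughly like this. The field names below follow HeyGen's v2 create-video format as documented at the time of writing (avatar + voice nested under `video_inputs`), but verify every field against the current API docs before shipping:

```python
def build_heygen_payload(avatar_id, voice_id, script, background_url=None):
    """Build the JSON body for HeyGen's create-video call.
    Field names follow HeyGen's v2 API docs; double-check them against
    the current documentation, since APIs drift."""
    payload = {
        "video_inputs": [{
            "character": {"type": "avatar", "avatar_id": avatar_id},
            "voice": {"type": "text", "input_text": script, "voice_id": voice_id},
        }],
        "dimension": {"width": 1280, "height": 720},
    }
    if background_url:
        # Background must be a publicly accessible URL (see gotchas below)
        payload["video_inputs"][0]["background"] = {
            "type": "image",
            "url": background_url,
        }
    return payload
```

In n8n, the HTTP Request node would send this as JSON with your API key in the auth header; the response's video_id is what you carry into the polling loop.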

Step 4: Poll until it’s done (because rendering isn’t instant)

Rendering takes time—often 1–5 minutes depending on load and video length. So you need a loop:

  1. Wait node: pause 20–30 seconds.
  2. HTTP Request / HeyGen “Get Video”: check status for that video_id.
  3. If node: if status is completed, continue; otherwise, wait and check again.

Practical tip: don’t poll every second. It’s rude, and it can trigger rate limits. Every 10–30 seconds is fine. [2] [3] [6]
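The Wait + If loop boils down to this logic. `check_status` is a stand-in for the actual HTTP call to HeyGen's status endpoint, injected so the loop itself is generic:

```python
import time

def poll_until_done(check_status, interval=20, timeout=600):
    """Poll every `interval` seconds until the render reports 'completed'
    (return the video URL) or 'failed' (raise). `check_status` stands in
    for the HTTP call to HeyGen's status endpoint and should return a
    (status, url) tuple."""
    waited = 0
    while waited <= timeout:
        status, url = check_status()
        if status == "completed":
            return url
        if status == "failed":
            raise RuntimeError("HeyGen render failed")
        time.sleep(interval)  # be polite: 10-30s, never hammer the API
        waited += interval
    raise TimeoutError("Render did not finish within the timeout")
```

In n8n you express the same thing with a Wait node looping back through an If node, but it helps to see the termination conditions spelled out: completed, failed, or timed out.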

Step 5: Grab the output URL and do something useful with it

Once the status flips to completed, the response should include a video_url (and often a thumbnail_url). Now you can:

  • Write it back to Google Sheets/Airtable (so you’ve got a content log)
  • Send a Slack message like “Video ready” with the link
  • Kick off posting to social platforms (directly or via a scheduler tool)
  • Email yourself an HTML preview button (yes, that’s a thing people do)

This is where n8n shines—distribution is just more nodes. [5] [6] [9]
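For the Slack step, the simplest route is an incoming-webhook payload; the message format below uses Slack's basic "text" field (the emoji and wording are obviously just one option):

```python
def build_slack_message(video_url, topic, thumbnail_url=None):
    """Build a Slack incoming-webhook payload announcing the finished video.
    Uses the plain 'text' field of Slack's incoming webhooks."""
    text = f":movie_camera: Video ready for *{topic}*\n{video_url}"
    if thumbnail_url:
        text += f"\nThumbnail: {thumbnail_url}"
    return {"text": text}
```

In n8n you'd use the Slack node instead of a raw webhook, but the payload shape is the same idea: a human-readable line plus the link your team actually needs.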

Start small (seriously)

If you’re new to this, don’t build the mega-workflow on day one. Start with:

  1. Manual Trigger (or Schedule)
  2. Set node: hardcode a topic like “3 AI trends this week”
  3. OpenAI/GPT node: generate a 60-second script
  4. HeyGen Create Video: send avatar_id, voice_id, script
  5. Wait + Poll loop
  6. Slack node: post the final video_url

Once that works, then you swap the topic source from “hardcoded” to “RSS” or “Sheets.” That’s how you avoid the classic automation trap: spending 6 hours building a pipeline before you’ve proven the core render loop works.

Common gotchas (aka stuff I wish people told me sooner)

  • Your script is too long: If you’re on a limited plan, you’ll hit duration/credit issues. Keep early tests short.
  • Voice/avatar IDs: Make sure you’re using the correct IDs from the HeyGen dashboard/API. One wrong character = silent failure vibes.
  • Background assets: If you pass a background URL, make sure it’s publicly accessible (or properly hosted).
  • Expect delays: Rendering isn’t instant. Design your n8n workflow like it’ll take a few minutes—because it will. [3] [6]
  • Free tier reality check: Watermarks and expiring credits are fine for testing, not ideal for production. Budget for Pro if this becomes a real channel. [1] [4]
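For the first gotcha, a rough length check before you hit render saves credits. The ~150 words-per-minute figure is a common speaking-pace rule of thumb, not a HeyGen number:

```python
def estimated_duration_seconds(script, words_per_minute=150):
    """Rough spoken-duration estimate. ~150 wpm is a generic speaking-pace
    assumption, not anything HeyGen-specific."""
    return len(script.split()) / words_per_minute * 60

def check_script_length(script, max_seconds=90):
    """Fail fast if the script would blow past the duration budget,
    before spending render credits."""
    est = estimated_duration_seconds(script)
    if est > max_seconds:
        raise ValueError(f"Script too long: ~{est:.0f}s > {max_seconds}s budget")
    return est
```

Drop this logic into a Code or If node right after the GPT step, so an overlong script loops back for a shorter rewrite instead of burning a render.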

Where this gets spicy (in a good way)

Once you’ve got the basics, you can go full content factory:

  • News recap bot: pull trending topics (Tavily) → script → HeyGen → post. [4] [8]
  • Blog-to-video: RSS trigger → scrape article → summarize → HeyGen with a screenshot background. [5]
  • Multi-platform distribution: one render → auto-post everywhere (or queue it in a scheduler).

Is this going to replace real creators? No. But it will replace the busywork that keeps creators from creating. And I’m aggressively pro-that.

Actionable takeaways (do this next)

  • Build the smallest working version first: manual trigger → GPT script → HeyGen render → poll → get URL.
  • Keep scripts short (60–90 seconds) until you’ve nailed reliability and quotas.
  • Poll politely: check every 10–30 seconds, not constantly.
  • Log every run: store video_id, status, and video_url in Sheets/Airtable for debugging.
  • Then automate the input: RSS, Sheets, scraping—whatever fits your content pipeline.

Sources

  1. HeyGen – product/API and plan limitations overview (watermarks, free-tier credit constraints per HeyGen docs/dashboard materials). https://www.heygen.com/
  2. n8n – workflow automation platform and templates/resources. https://n8n.io/
  3. HeyGen API usage pattern – create video + poll status until completed (common workflow described in HeyGen API guides and community examples). https://docs.heygen.com/
  4. HeyGen avatar/voice setup notes (avatar creation, voice cloning/import options) – HeyGen documentation/product guides. https://docs.heygen.com/
  5. Automation patterns: blog/news-to-video workflows using scraping + AI + video generation (n8n community patterns and integrations such as Apify/Firecrawl + LLM). https://n8n.io/integrations
  6. n8n looping/polling patterns (Wait node + conditional loop) – n8n docs/community examples. https://docs.n8n.io/
  7. n8n HeyGen node availability/updates – n8n integrations directory and node docs. https://n8n.io/integrations/heygen/
  8. Tavily / web search tooling for trend/news inputs used in automations. https://tavily.com/
  9. n8n workflow templates gallery (importable examples). https://n8n.io/workflows