STOP Paying for AI Video! Unlimited WanVideo Tutorial (SkyReels V3)

Run ComfyUI from anywhere with this fully automated setup guide for Google Colab. Access powerful AI image and video generation without local hardware through a secure Cloudflare tunnel.

What You’ll Learn

  • How to deploy ComfyUI on Google Colab’s free GPU in minutes
  • Automatic installation of essential custom nodes and models
  • Setting up secure remote access with Cloudflare tunnel
  • Performance optimization for GPU environments
  • Complete one-click automation script

Why ComfyUI on Google Colab?

ComfyUI is a powerful node-based interface for Stable Diffusion and other diffusion-model workflows, including video and audio generation. Running it on Google Colab provides several advantages:

🚀 Free GPU Access

No expensive hardware required. Google Colab provides free access to NVIDIA GPUs that are well suited to AI image and video generation.

ā˜ļø Cloud Flexibility

Access your ComfyUI setup from any device, anywhere. No local installation needed.

⚡ Quick Setup

Automated script handles all dependencies, models, and configurations in minutes.

Prerequisites

Before you begin, make sure you have:

  1. Google Account – Required for Google Colab access
  2. Google Colab Access – Free tier works, but Pro provides longer sessions
  3. Basic Understanding of Python – Helpful but not required
  4. Stable Internet Connection – For model downloads and remote access

💡 Pro Tip: Google Colab free tier has usage limits. For extended sessions, consider upgrading to Colab Pro for uninterrupted access.

Complete Installation Script

This comprehensive script automates the entire ComfyUI setup process. Simply copy and paste it into a new Google Colab notebook cell and execute. The script will:

  • Install all system dependencies and Python packages
  • Clone ComfyUI and lock it to a stable version
  • Install popular custom nodes (Manager, Video tools, Audio processing)
  • Download required AI models automatically
  • Configure optimal GPU memory settings
  • Set up Cloudflare tunnel for secure remote access
  • Launch ComfyUI with optimized parameters

āš ļø Important: Make sure to enable GPU runtime in Colab: Go to Runtime → Change runtime type → Hardware accelerator → GPU (T4) before running the script.

# ===============================
# 1) Install Basic Tools
# ===============================
!apt -q update
!apt -q install -y aria2
!sudo apt-get install -y sox libsox-fmt-all
!pip -q install -U pip setuptools wheel onnxruntime-gpu

# ---- Runtime Parameters ----
PIN_COMMIT = "17e7df43d19bde49efa46a32b89f5153b9cb0ded"   # Specify if you want to lock ComfyUI to a specific commit (leave empty for latest)
PORT = 8188       # ComfyUI listening port

import os, time, socket, threading, json, re, subprocess
from pathlib import Path

# ===============================
# 2) Clone ComfyUI & Install Dependencies
# ===============================
%cd /content
if not os.path.exists("/content/ComfyUI"):
    !git clone https://github.com/comfyanonymous/ComfyUI
%cd /content/ComfyUI

# Lock to specific commit or follow latest
if PIN_COMMIT:
    !git fetch --all -q
    !git reset --hard {PIN_COMMIT}
else:
    !git pull -q

!pip -q install -r requirements.txt

# For low VRAM (optional): Improve CUDA memory allocation behavior
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

# Attention optimization (if needed)
!pip -q install sageattention


# ===============================
# 3) Custom Nodes
# ===============================
%cd /content/ComfyUI/custom_nodes

# Install commonly used custom nodes (errors are ignored if a repo is already cloned)
!git clone https://github.com/ltdrdata/ComfyUI-Manager.git || true
!git clone https://github.com/kijai/ComfyUI-MelBandRoFormer.git || true
!git clone https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite.git || true
!git clone https://github.com/kijai/ComfyUI-KJNodes.git || true
!git clone https://github.com/kijai/ComfyUI-WanVideoWrapper.git || true

# If a custom node ships its own requirements.txt, install those dependencies too
for d in os.listdir("."):
    req = os.path.join(d, "requirements.txt")
    if os.path.exists(req):
        print(f"[pip] install requirements for {d}")
        !pip -q install -r {req}

# Extra libraries needed by the audio/video custom nodes
!pip -q install torch torchaudio transformers librosa accelerate

# ===============================
# 4) Download Required Files (Models / Input Materials / Workflow)
# ===============================
%cd /content/ComfyUI

# Download `url` to `out_path` with aria2c (multi-connection, resumable)
def dl(url, out_path):
    out_dir = os.path.dirname(out_path)
    os.makedirs(out_dir, exist_ok=True)
    cmd = f'aria2c --console-log-level=error -c -x16 -s16 -k1M "{url}" -d "{out_dir}" -o "{os.path.basename(out_path)}"'
    print(cmd)
    code = os.system(cmd)
    if code != 0:
        print("DOWNLOAD FAILED:", url, "->", out_path)
    else:
        print("DOWNLOADED:", out_path)


# (url, destination) pairs for every model file the default workflow needs
DL = [
    # diffusion_models
    ("https://huggingface.co/Kijai/WanVideo_comfy_fp8_scaled/resolve/main/SkyReelsV3/Wan21-SkyReelsV3-A2V_fp8_scaled_mixed.safetensors",
     "/content/ComfyUI/models/diffusion_models/Wan21-SkyReelsV3-A2V_fp8_scaled_mixed.safetensors"),

    ("https://huggingface.co/Kijai/MelBandRoFormer_comfy/resolve/main/MelBandRoformer_fp16.safetensors",
     "/content/ComfyUI/models/diffusion_models/MelBandRoformer_fp16.safetensors"),

    # text_encoders
    ("https://huggingface.co/Kijai/WanVideo_comfy/resolve/main/umt5-xxl-enc-bf16.safetensors",
     "/content/ComfyUI/models/text_encoders/umt5-xxl-enc-bf16.safetensors"),

    ("https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/clip_vision/clip_vision_h.safetensors",
     "models/clip_vision/clip_vision_h.safetensors"),

    # audio_encoders
    ("https://huggingface.co/Kijai/wav2vec2_safetensors/resolve/main/wav2vec2-chinese-base_fp16.safetensors",
     "/content/ComfyUI/models/audio_encoders/wav2vec2-chinese-base_fp16.safetensors"),

    # vae
    ("https://huggingface.co/Kijai/WanVideo_comfy/resolve/main/Wan2_1_VAE_bf16.safetensors",
     "/content/ComfyUI/models/vae/Wan2_1_VAE_bf16.safetensors"),
]

for url, path in DL:
    dl(url, path)

# ---- Input materials and workflow (managed separately from models) ----
%cd /content

prefix = "0186"
uuid = "1778e636-c4e8-4c69-9975-b71919ee0407"
root_path = f"https://archive.creativaier.com/comfyui_materials/{prefix}_{uuid}"

# workflow
dl(f"{root_path}/workflow.json", "/content/ComfyUI/user/default/workflows/default.json")

# input
dl(f"{root_path}/audio.wav", "/content/ComfyUI/input/audio.wav")
dl(f"{root_path}/photo.png", "/content/ComfyUI/input/audio.png")

# ===============================
# 5) comfy.settings.json (Optional: Improve UI usability)
# ===============================
settings_path = "/content/ComfyUI/user/default/comfy.settings.json"
x = {
    "Comfy.TutorialCompleted": True,
    "Comfy.Minimap.Visible": False,
    "Comfy.NodeBadge.NodeIdBadgeMode": "Show all",
    "Comfy.NodeBadge.NodeSourceBadgeMode": "Show all"
}
Path(settings_path).parent.mkdir(parents=True, exist_ok=True)
with open(settings_path, "w") as f:
    json.dump(x, f, indent=2)


# ===============================
# 6) cloudflared (External Tunnel)
# ===============================
%cd /content
if not os.path.exists("cloudflared-linux-amd64.deb"):
    !wget -q https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-amd64.deb
!dpkg -i cloudflared-linux-amd64.deb >/dev/null 2>&1 || true

def tunnel_printer(port):
    # Wait until ComfyUI starts before launching tunnel
    while True:
        time.sleep(0.5)
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            if s.connect_ex(("127.0.0.1", port)) == 0:
                break
        finally:
            s.close()

    print("\nComfyUI is up. Launching cloudflared tunnel...")
    p = subprocess.Popen(
        ["cloudflared", "tunnel", "--url", f"http://127.0.0.1:{port}"],
        stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True
    )
    for line in p.stderr:
        if "trycloudflare.com" in line:
            url = re.findall(r"https?://[^\s]+trycloudflare\.com[^\s]*", line)
            if url:
                print("\nšŸ”— Access ComfyUI:", url[0])


# ===============================
# 7) Start ComfyUI
# ===============================
%cd /content/ComfyUI

threading.Thread(target=tunnel_printer, args=(PORT,), daemon=True).start()
print("\nStarting ComfyUI...")
!python main.py --listen 0.0.0.0 --port {PORT} --dont-print-server

What This Script Does

| Section | Purpose | Details |
|---|---|---|
| 1. System Setup | Install dependencies | aria2 for fast downloads, sox for audio, onnxruntime for GPU acceleration |
| 2. ComfyUI Clone | Get ComfyUI source | Clones the repository and pins it to a stable commit for reproducibility |
| 3. Custom Nodes | Extend functionality | Manager, video tools, audio processing, and utility nodes installed automatically |
| 4. Model Downloads | AI models setup | Downloads diffusion models, encoders, and VAE with multi-threaded aria2c |
| 5. UI Settings | Optimize interface | Pre-configures ComfyUI settings for a better user experience |
| 6. Cloudflare Tunnel | Remote access | Creates a secure HTTPS tunnel for access from any device |
| 7. Launch | Start server | Runs ComfyUI with optimized settings on port 8188 |

Custom Nodes Included

This setup automatically installs these powerful extensions:

ComfyUI-Manager

Essential package manager for installing, updating, and managing custom nodes directly from the UI. Makes it easy to discover and install new extensions.

MelBandRoFormer

Advanced audio separation and processing capabilities. Perfect for isolating vocals, instruments, and other audio sources.

VideoHelperSuite

Comprehensive video frame manipulation tools for creating animations and video processing workflows.

KJNodes

Collection of utility nodes for workflow optimization, including mask operations, conditioning tools, and more.

WanVideoWrapper

Video generation capabilities using state-of-the-art models for creating AI-generated video content.

Accessing Your ComfyUI Instance

Once the script completes execution, you’ll see output similar to this:

ComfyUI is up. Launching cloudflared tunnel...
🔗 Access ComfyUI: https://random-words-1234.trycloudflare.com

Click the generated URL to open ComfyUI in your browser. This link works from any device with internet access – your phone, tablet, or another computer.

🔒 Security Note: The Cloudflare tunnel URL is public but randomly generated. While it provides security through obscurity, avoid sharing the link publicly if working with sensitive content.

Performance Optimizations Included

🎯 Memory Management

  • Expandable CUDA segments for efficient VRAM usage
  • Optimized memory allocation patterns
  • Automatic model offloading when needed

⚡ Speed Enhancements

  • SageAttention installed for faster attention (usable by nodes that support it)
  • Multi-threaded downloads with aria2c
  • Pre-optimized model formats (fp8, fp16, bf16)

Troubleshooting Common Issues

GPU Not Available

Problem: ComfyUI runs but image generation is extremely slow.

Solution: Ensure GPU runtime is enabled in Colab. Go to Runtime → Change runtime type → Hardware accelerator → GPU (T4 or better).

Models Not Loading

Problem: ComfyUI starts but no models appear in dropdowns.

Solution: Check the download logs for errors; models must finish downloading before they appear in ComfyUI. Verify that files landed in these directories (a quick check cell follows the list):

  • Diffusion models: models/diffusion_models/
  • Text encoders: models/text_encoders/
  • VAE: models/vae/
  • Audio encoders: models/audio_encoders/
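
If a download failed, the corresponding file will simply be missing from these folders. A quick way to verify is to list the model directories the script writes to; this sketch assumes the default /content/ComfyUI install path used above:

# List each model folder the setup script downloads into and flag empty ones
import os

for sub in ["diffusion_models", "text_encoders", "clip_vision", "audio_encoders", "vae"]:
    d = f"/content/ComfyUI/models/{sub}"
    files = os.listdir(d) if os.path.isdir(d) else []
    print(f"{d}: {', '.join(files) if files else 'EMPTY - re-run the matching dl() entry'}")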

Cloudflare Tunnel Fails

Problem: No public URL appears in the output.

Solution: The tunnel only launches after ComfyUI starts listening on its port. If no URL appears even though ComfyUI is running, start the tunnel manually in a new cell:

!cloudflared tunnel --url http://localhost:8188

Session Disconnects

Problem: Colab disconnects after being idle.

Solution: Free Colab has idle timeout limits. Consider:

  • Upgrading to Colab Pro for 24-hour runtimes
  • Using browser extensions to prevent idle timeout
  • Mounting Google Drive to save progress

Out of Memory Errors

Problem: CUDA out of memory errors during generation.

Solution: Try these approaches (a low-VRAM relaunch sketch follows the list):

  • Use lower resolution settings (512×512 instead of 1024×1024)
  • Reduce batch size to 1
  • Enable model offloading in ComfyUI settings
  • Close other running notebooks
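
If adjusting settings is not enough, you can also interrupt the launch cell and restart ComfyUI in its built-in low-VRAM mode, which offloads more model weights to system RAM at the cost of speed. A minimal variant of the launch command from the script:

# Relaunch ComfyUI with more aggressive offloading of model weights to system RAM
%cd /content/ComfyUI
!python main.py --listen 0.0.0.0 --port {PORT} --lowvram --dont-print-server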

Use Cases

This ComfyUI Colab setup is perfect for:

🎨 Artists & Designers

Create AI-generated artwork, videos, and audio content without investing in expensive GPU hardware.

🔬 Researchers

Experiment with different models, workflows, and techniques in a reproducible cloud environment.

👨‍💻 Developers

Test custom nodes, develop workflows, and prototype AI applications before deploying locally.

Tips for Extended Usage

Saving Your Work

To preserve your workflows and generated images between sessions:

# Add this at the beginning of your notebook to mount Google Drive
from google.colab import drive
drive.mount('/content/drive')

# Then point the ComfyUI output directory at Drive
# (remove the existing folder first so the symlink replaces it instead of nesting inside it)
!mkdir -p /content/drive/MyDrive/ComfyUI_Output
!rm -rf /content/ComfyUI/output
!ln -s /content/drive/MyDrive/ComfyUI_Output /content/ComfyUI/output

Adding More Models

To download additional models, add entries to the DL list in the script:

# Example: Adding SDXL model
("https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0.safetensors",
 "/content/ComfyUI/models/checkpoints/sd_xl_base_1.0.safetensors"),

Customizing the Setup

You can customize the script by modifying these parameters (an example follows the list):

  • PIN_COMMIT – Set to empty string "" to use latest ComfyUI version
  • PORT – Change if 8188 conflicts with other services
  • Add/remove custom nodes by editing the git clone section
  • Modify the DL array to include your preferred models
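
For example, the first two tweaks only require editing the runtime parameters near the top of the script (the port number below is just an arbitrary free port):

# ---- Runtime Parameters (edited) ----
PIN_COMMIT = ""   # empty string: skip pinning and pull the latest ComfyUI commit
PORT = 8288       # example alternative port; the cloudflared tunnel follows this value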

Conclusion

This automated setup transforms Google Colab into a powerful, cloud-based ComfyUI workstation accessible from anywhere. With free GPU access, pre-configured custom nodes, and automatic model downloads, you can start creating AI-generated content in minutes without any local hardware requirements.

The Cloudflare tunnel provides secure remote access, making it easy to work on your projects from any device. Whether you’re creating art, developing workflows, or experimenting with new AI models, this setup offers a portable and reproducible environment that eliminates the complexity of local installation.

Frequently Asked Questions

How long does the setup take?

The complete setup takes approximately 5-15 minutes depending on your internet speed and the size of models being downloaded. Model downloads are the most time-consuming part, but aria2c’s multi-threaded downloading speeds this up significantly.

Can I use this with Colab’s free tier?

Yes! The script works perfectly with Colab’s free tier. However, free tier has session time limits (usually 12 hours) and idle timeout restrictions. For production use or longer sessions, Colab Pro ($10/month) is recommended.

Are my workflows saved between sessions?

By default, no. Colab’s runtime is ephemeral. To save workflows, mount Google Drive and configure ComfyUI to save there. The script includes a default workflow that loads automatically on startup.
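
A minimal backup sketch (the Drive folder name is just an example): interrupt the ComfyUI cell, then run this in a separate cell before the runtime shuts down.

# Copy saved workflows to Google Drive so they survive the runtime reset
from google.colab import drive
drive.mount('/content/drive')
!mkdir -p /content/drive/MyDrive/ComfyUI_Workflows
!cp -r /content/ComfyUI/user/default/workflows/. /content/drive/MyDrive/ComfyUI_Workflows/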

Can I share the Cloudflare URL with others?

Yes, you can share the URL with collaborators. However, be aware that everyone will share the same GPU resources, and Colab free tier doesn’t support heavy concurrent usage. The URL changes each time you restart the tunnel.

What models are downloaded by default?

The script downloads WanVideo models for video generation, MelBandRoFormer for audio processing, text encoders (UMT5), CLIP vision models, audio encoders (Wav2Vec2), and VAE models. All models are in optimized formats (fp8, fp16, bf16) for better performance.

How do I update ComfyUI or custom nodes?

Use ComfyUI Manager (included in the setup) to update custom nodes from within the interface. To update ComfyUI itself, either remove the PIN_COMMIT value or change it to a newer commit hash.
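
If you prefer to do this from a notebook cell instead of the UI, a minimal sketch (assuming the default /content/ComfyUI location) is:

# Pull the latest ComfyUI commit and refresh its Python dependencies
%cd /content/ComfyUI
!git pull
!pip -q install -r requirements.txt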
