---
pipeline_tag: image-text-to-text
library_name: transformers
language:
  - en
base_model:
  - Qwen/Qwen3-VL-30B-A3B-Instruct
tags:
  - browser_use
---

# BU-30B-A3B-Preview


Meet BU-30B-A3B-Preview, bringing SoTA browser-use capabilities in a model small enough to host on a single GPU.

This model is trained specifically for use with the browser-use open-source library and provides comprehensive browsing capabilities with strong DOM understanding and visual reasoning.

## Quickstart (BU Cloud)

You can use this model directly on BU Cloud:

1. Get your API key from BU Cloud.
2. Set the environment variable: `export BROWSER_USE_API_KEY="your-key"`
3. Install the browser-use library following the instructions here, then run:
```python
from dotenv import load_dotenv
from browser_use import Agent, ChatBrowserUse

load_dotenv()

llm = ChatBrowserUse(
    model='browser-use/bu-30b-a3b-preview',  # BU open-source model
)

agent = Agent(
    task='Find the number of stars of browser-use and stagehand. Tell me which one has more stars :)',
    llm=llm,
    flash_mode=True,
)
agent.run_sync()
```

## Quickstart (vLLM)

We recommend using this model with vLLM.

### Installation

Make sure to install vllm >= 0.12.0:

```shell
pip install vllm --upgrade
```

### Serve

A simple launch command is:

```shell
vllm serve browser-use/bu-30b-a3b-preview \
  --max-model-len 32768 \
  --host 0.0.0.0 \
  --port 8000
```

This creates an OpenAI-compatible endpoint on localhost that you can use as follows:

```python
from dotenv import load_dotenv
from browser_use import Agent, ChatOpenAI

load_dotenv()

llm = ChatOpenAI(
    base_url='http://localhost:8000/v1',
    model='browser-use/bu-30b-a3b-preview',
    temperature=0.6,
    top_p=0.95,
    dont_force_structured_output=True,  # speed up by disabling structured output
)

agent = Agent(
    task='Find the number of stars of browser-use and stagehand. Tell me which one has more stars :)',
    llm=llm,
)
agent.run_sync()
```
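If you want to sanity-check the server without going through browser-use, you can also hit the OpenAI-compatible chat-completions route directly. A minimal standard-library sketch (the commented-out request assumes the vLLM server from the Serve step is running on `localhost:8000`; nothing here is part of the browser-use API):

```python
import json
import urllib.request

# Build an OpenAI-style chat-completions payload; sampling values
# mirror the ChatOpenAI snippet above.
payload = {
    "model": "browser-use/bu-30b-a3b-preview",
    "messages": [{"role": "user", "content": "Say hello."}],
    "temperature": 0.6,
    "top_p": 0.95,
}
body = json.dumps(payload).encode()

# Sending requires the server from the Serve step to be running:
# req = urllib.request.Request(
#     "http://localhost:8000/v1/chat/completions",
#     data=body,
#     headers={"Content-Type": "application/json"},
# )
# resp = json.load(urllib.request.urlopen(req))
# print(resp["choices"][0]["message"]["content"])
```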

## Model Details

| Property | Value |
| --- | --- |
| Base Model | Qwen/Qwen3-VL-30B-A3B-Instruct |
| Parameters | 30B total, 3B active (MoE) |
| Context Length | 32,768 tokens |
| Architecture | Vision-language model (Mixture of Experts) |
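The single-GPU claim can be sanity-checked with back-of-the-envelope arithmetic. A rough, illustrative sketch (assumes bf16 weights and ignores KV cache and activation memory; note that although only ~3B parameters are routed per token, all 30B expert weights must be resident in GPU memory when serving):

```python
total_params = 30e9      # 30B total parameters (MoE, all experts)
bytes_per_param = 2      # bf16/fp16 weights
weight_gib = total_params * bytes_per_param / 2**30
print(f"~{weight_gib:.0f} GiB of weights")  # prints "~56 GiB of weights"
```

So the raw weights fit on a single 80 GB GPU, with headroom left for the KV cache at the 32K context length.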

## Links