
AIOpenCode
Deploying Qwen3-Coder-30B-A3B on 8GB GPU with Docker
A 30B model on an 8GB GPU sounds impossible, but quantization and llama.cpp make it work. This guide shows how to run it with Docker and use it in OpenCode.
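As a rough sketch of what such a setup might look like, here is a hedged example of serving a quantized GGUF build of the model through llama.cpp's server container. The image tag, model filename, quantization level, and layer/context numbers below are illustrative assumptions, not tested values; consult the llama.cpp documentation for the current image and flags.

```shell
# Sketch only: image tag, model path, and tuning values are assumptions.
# A Q4_K_M quantization of a 30B MoE model still exceeds 8 GB of VRAM,
# so only part of the layers are offloaded to the GPU (--n-gpu-layers);
# the rest run on the CPU from system RAM.
docker run --rm --gpus all -p 8080:8080 \
  -v "$HOME/models:/models" \
  ghcr.io/ggerganov/llama.cpp:server-cuda \
  -m /models/Qwen3-Coder-30B-A3B-Instruct-Q4_K_M.gguf \
  --n-gpu-layers 20 \
  --ctx-size 8192 \
  --host 0.0.0.0 --port 8080
```

Once the server is up, OpenCode (or any OpenAI-compatible client) can point at `http://localhost:8080` as a local endpoint; the exact `--n-gpu-layers` value that fits depends on your quantization and free VRAM, so it typically takes a little trial and error.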
