OpenAI just released two open-weight models—gpt-oss-120b and gpt-oss-20b—after months of anticipation (you can try them here).
That means anyone with a MacBook Pro can run an o3-mini-level model locally: no Internet connection required.
That’s a big deal, so let’s talk about the implications: what OpenAI is trying to achieve with open-source models (“GPT-OSS”), the future of edge inference, and how Anthropic, Cursor, etc., are impacted.
First, here are the models they just shipped:
gpt-oss-120b: a model on par with o4-mini that costs about 1/20th as much as Anthropic’s Sonnet 4 (when served via Groq), and that outperforms the state-of-the-art Chinese open model with just a fifth of the active weights, making it dramatically cheaper to serve.
gpt-oss-20b: the smaller sibling that can run on a modern MacBook Pro. It performs at about o3-mini’s level and is perfect for most day-to-day tasks.
Notably, it’s been six years since OpenAI last released an open-weight model (GPT-2 in 2019).
So why come back now, and why make such an aggressive bid to reclaim the open-source throne?