Available today: gpt-oss-20B Model on Windows with GPU Acceleration – further pushing the boundaries on the edge
With OpenAI’s release of gpt-oss models today, we are thrilled to bring GPU-optimized gpt-oss-20B model variants to Windows devices.
This milestone brings powerful, open-source reasoning models to Windows developers, with support for local inference. You can try it out in Foundry Local or AI Toolkit for VS Code (AITK) and start using it in your applications today.
Learn more about what’s possible with OpenAI’s gpt-oss models on the Azure blog.
Want to get started on Windows today?
Get gpt-oss-20B up and running on your Windows device in just a few minutes using Foundry Local or AI Toolkit!
To get started with Foundry Local:
- Install Foundry Local via WinGet (recommended) using the following command:

  winget install Microsoft.FoundryLocal

  Note: As an alternative, Foundry Local can also be installed from GitHub.
- Open your Terminal and run the model from the Foundry Local CLI with the following command:

  foundry model run gpt-oss-20B

- Start sending Foundry Local your prompts!
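If you want to script the steps above rather than type them by hand, a minimal sketch is to check that the Foundry Local CLI is on PATH before launching the model. The helper names here (`foundry_cli_available`, `run_model`) are illustrative, not part of Foundry Local itself; only the `foundry model run` command comes from the steps above.

```python
# Sketch: scripting the Foundry Local CLI steps above.
# Helper names are illustrative; only the `foundry` commands are real.
import shutil
import subprocess


def foundry_cli_available() -> bool:
    """Return True if the `foundry` executable is on PATH."""
    return shutil.which("foundry") is not None


def run_model(alias: str = "gpt-oss-20B") -> None:
    # Hands the terminal over to an interactive Foundry Local session.
    if not foundry_cli_available():
        raise RuntimeError(
            "Foundry Local not found; install it first: "
            "winget install Microsoft.FoundryLocal"
        )
    subprocess.run(["foundry", "model", "run", alias], check=True)
```

On a machine without Foundry Local installed, `run_model()` fails fast with a pointer to the WinGet install command instead of an opaque "command not found" error.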
To get started with AI Toolkit for VS Code:
- If you don’t have it already, install Visual Studio Code.
- Install the AI Toolkit extension.
- Open Model Catalog and download the model gpt-oss-20B.
- Open the Model Playground, load the model, and start sending it prompts!
After exploring the model in either tool, you can refine prompts, tune inference parameters, and integrate the model into your app using the Foundry Local SDK.
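As a rough sketch of that integration step: Foundry Local exposes an OpenAI-compatible REST endpoint, so sending a prompt from code amounts to POSTing a chat-completions payload to the local service. The base URL and port below are assumptions (the service picks its own endpoint; check what Foundry Local reports on your machine), and `build_chat_request`/`send` are illustrative helper names, not SDK functions.

```python
# Sketch: calling a locally running gpt-oss-20B through Foundry Local's
# OpenAI-compatible endpoint. Port and model alias are assumptions --
# use the endpoint your local service actually reports.
import json
import urllib.request


def build_chat_request(prompt: str, model: str = "gpt-oss-20B") -> dict:
    """Build an OpenAI-style chat-completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }


def send(prompt: str, base_url: str = "http://localhost:5273/v1") -> str:
    # POST the payload to the chat/completions route of the local service.
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(build_chat_request(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Extract the assistant's reply from the OpenAI-style response shape.
    return body["choices"][0]["message"]["content"]


# Usage (requires the model running locally):
#   reply = send("Summarize what gpt-oss-20B is in one sentence.")
```

Because the endpoint is OpenAI-compatible, the same payload shape works with any OpenAI-style client library pointed at the local base URL, which keeps the move from local prototyping to a hosted endpoint largely a one-line change.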