An Archaeological Python Adventure
[45, 23, 67, 12, 89, 34, 56, 78, 21, 43]
Categories: <30, 30-60, >60
str(number)
len(list)
sum(list)
[x for x in list if ...]
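Taken together, the hints above can be combined into one short sketch. This is an illustrative solution, not the official answer key: the variable names (`measurements`, `small`, `medium`, `large`) and the assumption that the three categories apply to the numeric values are mine.

```python
# Sample data from the exercise
measurements = [45, 23, 67, 12, 89, 34, 56, 78, 21, 43]

# A list comprehension filters each category (boundaries assumed inclusive for 30-60)
small = [x for x in measurements if x < 30]
medium = [x for x in measurements if 30 <= x <= 60]
large = [x for x in measurements if x > 60]

print(len(small), len(medium), len(large))  # → 3 4 3 (count per category)
print(sum(measurements))                    # → 468 (total of all values)
print(str(measurements[0]))                 # → '45' (str() turns a number into text)
```

Each hint maps directly onto one line: `str(number)` for display, `len(list)` for counts, `sum(list)` for totals, and `[x for x in list if ...]` for the category filters.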
Ask me anything about Python, archaeology, or the investigation...
Ollama doesn't appear to be running. View setup instructions →
The in-browser AI assistant uses Qwen 2.5 Coder (1.5B), a coding-focused language model that runs entirely in your browser using WebGPU.
Initializing...
This may take a few minutes depending on your connection.
The AI assistant is now available. You can use it for hints, debugging, and code explanations.
An error occurred while setting up the model.
Ollama doesn't appear to be running. Would you like to:
Run Qwen 2.5 Coder directly in your browser. No installation required!
Install Ollama for faster responses and more model options.
The easiest option! Select "In-Browser (WebGPU)" in AI Settings to run a model directly in your browser.
Click the ⚙️ Settings button in the footer to try it!
For faster responses and more model choices, install Ollama on your computer.
Download and install Ollama from https://ollama.ai
curl -fsSL https://ollama.ai/install.sh | sh
To allow this website () to connect to your local Ollama server, you need to set environment variables:
⚠️ Important: Stop Ollama first if it's already running, then set the environment variable and restart it.
set OLLAMA_ORIGINS=
ollama serve
$env:OLLAMA_ORIGINS=""
Win + R
sysdm.cpl
OLLAMA_ORIGINS
export OLLAMA_ORIGINS=""
echo 'export OLLAMA_ORIGINS=""' >> ~/.bashrc
source ~/.bashrc
launchctl setenv OLLAMA_ORIGINS ""
Download a model suitable for coding assistance:
ollama pull qwen2.5-coder:7b
ollama pull granite3.1-dense:8b
ollama pull llama3.2:3b
Once Ollama is running with CORS configured:
💡 Auto-connect: Your settings are saved! Next time you visit, the AI will automatically connect if Ollama is running.
Ctrl+C
ollama pull model-name
Let's learn how to use the code editor.
Excellent work!